00:00:00.001 Started by upstream project "autotest-per-patch" build number 132316
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.087 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.088 The recommended git tool is: git
00:00:00.088 using credential 00000000-0000-0000-0000-000000000002
00:00:00.090 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.140 Fetching changes from the remote Git repository
00:00:00.142 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.198 Using shallow fetch with depth 1
00:00:00.198 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.198 > git --version # timeout=10
00:00:00.248 > git --version # 'git version 2.39.2'
00:00:00.248 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.284 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.284 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.189 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.201 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.212 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:06.212 > git config core.sparsecheckout # timeout=10
00:00:06.223 > git read-tree -mu HEAD # timeout=10
00:00:06.241 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:06.260 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:06.260 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:06.344 [Pipeline] Start of Pipeline
00:00:06.359 [Pipeline] library
00:00:06.362 Loading library shm_lib@master
00:00:06.362 Library shm_lib@master is cached. Copying from home.
00:00:06.377 [Pipeline] node
00:00:06.387 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:06.389 [Pipeline] {
00:00:06.400 [Pipeline] catchError
00:00:06.402 [Pipeline] {
00:00:06.415 [Pipeline] wrap
00:00:06.424 [Pipeline] {
00:00:06.433 [Pipeline] stage
00:00:06.435 [Pipeline] { (Prologue)
00:00:06.674 [Pipeline] sh
00:00:06.959 + logger -p user.info -t JENKINS-CI
00:00:06.978 [Pipeline] echo
00:00:06.980 Node: WFP8
00:00:06.987 [Pipeline] sh
00:00:07.279 [Pipeline] setCustomBuildProperty
00:00:07.288 [Pipeline] echo
00:00:07.289 Cleanup processes
00:00:07.292 [Pipeline] sh
00:00:07.573 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.573 1414721 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.585 [Pipeline] sh
00:00:07.868 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.868 ++ grep -v 'sudo pgrep'
00:00:07.868 ++ awk '{print $1}'
00:00:07.868 + sudo kill -9
00:00:07.868 + true
00:00:07.882 [Pipeline] cleanWs
00:00:07.892 [WS-CLEANUP] Deleting project workspace...
00:00:07.892 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.898 [WS-CLEANUP] done
00:00:07.902 [Pipeline] setCustomBuildProperty
00:00:07.914 [Pipeline] sh
00:00:08.199 + sudo git config --global --replace-all safe.directory '*'
00:00:08.289 [Pipeline] httpRequest
00:00:08.822 [Pipeline] echo
00:00:08.823 Sorcerer 10.211.164.20 is alive
00:00:08.831 [Pipeline] retry
00:00:08.833 [Pipeline] {
00:00:08.847 [Pipeline] httpRequest
00:00:08.850 HttpMethod: GET
00:00:08.851 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.851 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.867 Response Code: HTTP/1.1 200 OK
00:00:08.867 Success: Status code 200 is in the accepted range: 200,404
00:00:08.868 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:12.945 [Pipeline] }
00:00:12.962 [Pipeline] // retry
00:00:12.970 [Pipeline] sh
00:00:13.257 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:13.275 [Pipeline] httpRequest
00:00:13.674 [Pipeline] echo
00:00:13.676 Sorcerer 10.211.164.20 is alive
00:00:13.686 [Pipeline] retry
00:00:13.688 [Pipeline] {
00:00:13.701 [Pipeline] httpRequest
00:00:13.706 HttpMethod: GET
00:00:13.706 URL: http://10.211.164.20/packages/spdk_a0c128549ce17427c3a035fd0ecce392e10dce99.tar.gz
00:00:13.708 Sending request to url: http://10.211.164.20/packages/spdk_a0c128549ce17427c3a035fd0ecce392e10dce99.tar.gz
00:00:13.733 Response Code: HTTP/1.1 200 OK
00:00:13.733 Success: Status code 200 is in the accepted range: 200,404
00:00:13.734 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_a0c128549ce17427c3a035fd0ecce392e10dce99.tar.gz
00:02:05.587 [Pipeline] }
00:02:05.606 [Pipeline] // retry
00:02:05.614 [Pipeline] sh
00:02:05.900 + tar --no-same-owner -xf spdk_a0c128549ce17427c3a035fd0ecce392e10dce99.tar.gz
00:02:08.459 [Pipeline] sh
00:02:08.739 + git -C spdk log --oneline -n5
00:02:08.739 a0c128549 bdev/nvme: Make bdev nvme get and set opts APIs public
00:02:08.739 53ca6a885 bdev/nvme: Rearrange fields in spdk_bdev_nvme_opts to reduce holes.
00:02:08.739 03b7aa9c7 bdev/nvme: Move the spdk_bdev_nvme_opts and spdk_bdev_timeout_action struct to the public header.
00:02:08.739 d47eb51c9 bdev: fix a race between reset start and complete
00:02:08.739 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process
00:02:08.750 [Pipeline] }
00:02:08.765 [Pipeline] // stage
00:02:08.774 [Pipeline] stage
00:02:08.776 [Pipeline] { (Prepare)
00:02:08.794 [Pipeline] writeFile
00:02:08.812 [Pipeline] sh
00:02:09.097 + logger -p user.info -t JENKINS-CI
00:02:09.110 [Pipeline] sh
00:02:09.395 + logger -p user.info -t JENKINS-CI
00:02:09.407 [Pipeline] sh
00:02:09.693 + cat autorun-spdk.conf
00:02:09.693 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:09.693 SPDK_TEST_NVMF=1
00:02:09.693 SPDK_TEST_NVME_CLI=1
00:02:09.693 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:09.693 SPDK_TEST_NVMF_NICS=e810
00:02:09.693 SPDK_TEST_VFIOUSER=1
00:02:09.693 SPDK_RUN_UBSAN=1
00:02:09.693 NET_TYPE=phy
00:02:09.701 RUN_NIGHTLY=0
00:02:09.706 [Pipeline] readFile
00:02:09.731 [Pipeline] withEnv
00:02:09.733 [Pipeline] {
00:02:09.748 [Pipeline] sh
00:02:10.040 + set -ex
00:02:10.043 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:02:10.043 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:10.043 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:10.043 ++ SPDK_TEST_NVMF=1
00:02:10.043 ++ SPDK_TEST_NVME_CLI=1
00:02:10.043 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:10.043 ++ SPDK_TEST_NVMF_NICS=e810
00:02:10.043 ++ SPDK_TEST_VFIOUSER=1
00:02:10.043 ++ SPDK_RUN_UBSAN=1
00:02:10.043 ++ NET_TYPE=phy
00:02:10.043 ++ RUN_NIGHTLY=0
00:02:10.043 + case $SPDK_TEST_NVMF_NICS in
00:02:10.043 + DRIVERS=ice
00:02:10.043 + [[ tcp == \r\d\m\a ]]
00:02:10.043 + [[ -n ice ]]
00:02:10.043 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:02:10.043 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:02:10.043 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:02:10.043 rmmod: ERROR: Module irdma is not currently loaded
00:02:10.043 rmmod: ERROR: Module i40iw is not currently loaded
00:02:10.043 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:02:10.043 + true
00:02:10.043 + for D in $DRIVERS
00:02:10.043 + sudo modprobe ice
00:02:10.043 + exit 0
00:02:10.053 [Pipeline] }
00:02:10.071 [Pipeline] // withEnv
00:02:10.077 [Pipeline] }
00:02:10.094 [Pipeline] // stage
00:02:10.105 [Pipeline] catchError
00:02:10.107 [Pipeline] {
00:02:10.120 [Pipeline] timeout
00:02:10.120 Timeout set to expire in 1 hr 0 min
00:02:10.122 [Pipeline] {
00:02:10.137 [Pipeline] stage
00:02:10.139 [Pipeline] { (Tests)
00:02:10.155 [Pipeline] sh
00:02:10.442 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:10.442 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:10.442 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:10.442 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:02:10.442 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:10.442 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:10.442 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:02:10.442 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:10.442 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:10.442 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:10.442 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:02:10.442 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:10.442 + source /etc/os-release
00:02:10.442 ++ NAME='Fedora Linux'
00:02:10.442 ++ VERSION='39 (Cloud Edition)'
00:02:10.442 ++ ID=fedora
00:02:10.442 ++ VERSION_ID=39
00:02:10.442 ++ VERSION_CODENAME=
00:02:10.442 ++ PLATFORM_ID=platform:f39
00:02:10.442 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:10.442 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:10.442 ++ LOGO=fedora-logo-icon
00:02:10.442 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:10.442 ++ HOME_URL=https://fedoraproject.org/
00:02:10.442 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:10.442 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:10.442 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:10.442 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:10.442 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:10.442 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:10.442 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:10.442 ++ SUPPORT_END=2024-11-12
00:02:10.442 ++ VARIANT='Cloud Edition'
00:02:10.442 ++ VARIANT_ID=cloud
00:02:10.442 + uname -a
00:02:10.442 Linux spdk-wfp-08 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:10.442 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:02:12.981 Hugepages
00:02:12.981 node hugesize free / total
00:02:12.981 node0 1048576kB 0 / 0
00:02:12.981 node0 2048kB 0 / 0
00:02:12.981 node1 1048576kB 0 / 0
00:02:12.981 node1 2048kB 0 / 0
00:02:12.981
00:02:12.981 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:12.981 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:02:12.981 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:02:12.981 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:02:12.981 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:02:12.981 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:02:12.981 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:02:12.981 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:02:12.981 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:02:12.981 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:02:12.982 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:02:12.982 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:02:12.982 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:02:12.982 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:02:12.982 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:02:12.982 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:02:12.982 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:02:12.982 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:02:12.982 + rm -f /tmp/spdk-ld-path
00:02:12.982 + source autorun-spdk.conf
00:02:12.982 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:12.982 ++ SPDK_TEST_NVMF=1
00:02:12.982 ++ SPDK_TEST_NVME_CLI=1
00:02:12.982 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:12.982 ++ SPDK_TEST_NVMF_NICS=e810
00:02:12.982 ++ SPDK_TEST_VFIOUSER=1
00:02:12.982 ++ SPDK_RUN_UBSAN=1
00:02:12.982 ++ NET_TYPE=phy
00:02:12.982 ++ RUN_NIGHTLY=0
00:02:12.982 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:12.982 + [[ -n '' ]]
00:02:12.982 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:12.982 + for M in /var/spdk/build-*-manifest.txt
00:02:12.982 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:12.982 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:12.982 + for M in /var/spdk/build-*-manifest.txt
00:02:12.982 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:12.982 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:12.982 + for M in /var/spdk/build-*-manifest.txt
00:02:12.982 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:12.982 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:12.982 ++ uname
00:02:12.982 + [[ Linux == \L\i\n\u\x ]]
00:02:12.982 + sudo dmesg -T
00:02:13.241 + sudo dmesg --clear
00:02:13.241 + dmesg_pid=1415644
00:02:13.241 + [[ Fedora Linux == FreeBSD ]]
00:02:13.241 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:13.241 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:13.241 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:13.241 + [[ -x /usr/src/fio-static/fio ]]
00:02:13.241 + export FIO_BIN=/usr/src/fio-static/fio
00:02:13.241 + FIO_BIN=/usr/src/fio-static/fio
00:02:13.241 + sudo dmesg -Tw
00:02:13.241 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:13.241 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:13.241 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:13.241 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:13.241 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:13.241 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:13.241 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:13.241 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:13.241 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:13.242 10:29:20 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:13.242 10:29:20 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:13.242 10:29:20 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:13.242 10:29:20 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:02:13.242 10:29:20 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:02:13.242 10:29:20 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:13.242 10:29:20 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:02:13.242 10:29:20 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:02:13.242 10:29:20 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:02:13.242 10:29:20 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:02:13.242 10:29:20 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:02:13.242 10:29:20 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:13.242 10:29:20 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:13.242 10:29:20 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:13.242 10:29:20 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:02:13.242 10:29:20 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:13.242 10:29:20 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:13.242 10:29:20 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:13.242 10:29:20 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:13.242 10:29:20 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:13.242 10:29:20 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:13.242 10:29:20 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:13.242 10:29:20 -- paths/export.sh@5 -- $ export PATH
00:02:13.242 10:29:20 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:13.242 10:29:20 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:02:13.242 10:29:20 -- common/autobuild_common.sh@486 -- $ date +%s
00:02:13.242 10:29:20 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1732008560.XXXXXX
00:02:13.242 10:29:20 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1732008560.ZMA9Wt
00:02:13.242 10:29:20 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:02:13.242 10:29:20 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:02:13.242 10:29:20 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:02:13.242 10:29:20 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:02:13.242 10:29:20 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:02:13.242 10:29:20 -- common/autobuild_common.sh@502 -- $ get_config_params
00:02:13.242 10:29:20 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:13.242 10:29:20 -- common/autotest_common.sh@10 -- $ set +x
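A note on the date +%s / mktemp pair in the autobuild_common.sh trace above: the epoch timestamp seeds a per-run scratch directory, which the script then records as SPDK_WORKSPACE. A minimal bash sketch of the same pattern; the trap-based cleanup at the end is an assumption added for illustration and is not shown in this log:

  # Epoch-stamped scratch workspace (pattern from autobuild_common.sh@486 above)
  ts=$(date +%s)                                    # 1732008560 in this run
  SPDK_WORKSPACE=$(mktemp -dt "spdk_${ts}.XXXXXX")  # e.g. /tmp/spdk_1732008560.ZMA9Wt
  export SPDK_WORKSPACE
  trap 'rm -rf "$SPDK_WORKSPACE"' EXIT              # assumed cleanup, not visible in the log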
00:02:13.242 10:29:20 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:02:13.242 10:29:20 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:02:13.242 10:29:20 -- pm/common@17 -- $ local monitor
00:02:13.242 10:29:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:13.242 10:29:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:13.242 10:29:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:13.242 10:29:20 -- pm/common@21 -- $ date +%s
00:02:13.242 10:29:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:13.242 10:29:20 -- pm/common@21 -- $ date +%s
00:02:13.242 10:29:20 -- pm/common@25 -- $ sleep 1
00:02:13.242 10:29:20 -- pm/common@21 -- $ date +%s
00:02:13.242 10:29:20 -- pm/common@21 -- $ date +%s
00:02:13.242 10:29:20 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732008560
00:02:13.242 10:29:20 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732008560
00:02:13.242 10:29:20 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732008560
00:02:13.242 10:29:20 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732008560
00:02:13.502 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732008560_collect-cpu-load.pm.log
00:02:13.502 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732008560_collect-vmstat.pm.log
00:02:13.502 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732008560_collect-cpu-temp.pm.log
00:02:13.502 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732008560_collect-bmc-pm.bmc.pm.log
00:02:14.491 10:29:21 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:02:14.491 10:29:21 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:14.491 10:29:21 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:14.491 10:29:21 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:14.491 10:29:21 -- spdk/autobuild.sh@16 -- $ date -u
00:02:14.491 Tue Nov 19 09:29:21 AM UTC 2024
00:02:14.491 10:29:21 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:14.491 v25.01-pre-193-ga0c128549
00:02:14.491 10:29:21 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:02:14.491 10:29:21 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:14.491 10:29:21 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:14.491 10:29:21 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:14.491 10:29:21 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:14.491 10:29:21 -- common/autotest_common.sh@10 -- $ set +x
************************************
00:02:14.491 START TEST ubsan
************************************
00:02:14.491 10:29:21 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:14.491 using ubsan
00:02:14.491
00:02:14.491 real 0m0.000s
00:02:14.491 user 0m0.000s
00:02:14.491 sys 0m0.000s
00:02:14.491 10:29:21 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:14.491 10:29:21 ubsan -- common/autotest_common.sh@10 -- $ set +x
************************************
00:02:14.491 END TEST ubsan
************************************
00:02:14.491 10:29:21 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:14.491 10:29:21 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:14.491 10:29:21 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:14.491 10:29:21 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:14.491 10:29:21 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:14.491 10:29:21 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:14.491 10:29:21 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:14.491 10:29:21 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:14.491 10:29:21 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:02:14.491 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:02:14.491 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:02:15.060 Using 'verbs' RDMA provider
00:02:27.848 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:40.066 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:40.066 Creating mk/config.mk...done.
00:02:40.066 Creating mk/cc.flags.mk...done.
00:02:40.066 Type 'make' to build.
10:29:47 -- spdk/autobuild.sh@70 -- $ run_test make make -j96
10:29:47 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
10:29:47 -- common/autotest_common.sh@1111 -- $ xtrace_disable
10:29:47 -- common/autotest_common.sh@10 -- $ set +x
************************************
00:02:40.066 START TEST make
************************************
10:29:47 make -- common/autotest_common.sh@1129 -- $ make -j96
00:02:40.325 make[1]: Nothing to be done for 'all'.
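Both test stages above (run_test ubsan echo 'using ubsan', and run_test make make -j96) go through the same harness in autotest_common.sh: print a START TEST banner, time the wrapped command, then print an END TEST banner, which is exactly the framing around "using ubsan" and its real/user/sys figures. A rough, hedged sketch of that wrapper's shape; the real helper also manages xtrace state and test accounting that is elided here:

  # Approximate shape of the run_test wrapper seen in this log (illustrative only)
  run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"        # e.g. make -j96, or: echo 'using ubsan'
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
  }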
00:02:41.704 The Meson build system
00:02:41.704 Version: 1.5.0
00:02:41.704 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:02:41.704 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:41.704 Build type: native build
00:02:41.704 Project name: libvfio-user
00:02:41.704 Project version: 0.0.1
00:02:41.704 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:41.704 C linker for the host machine: cc ld.bfd 2.40-14
00:02:41.704 Host machine cpu family: x86_64
00:02:41.704 Host machine cpu: x86_64
00:02:41.704 Run-time dependency threads found: YES
00:02:41.704 Library dl found: YES
00:02:41.704 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:41.704 Run-time dependency json-c found: YES 0.17
00:02:41.704 Run-time dependency cmocka found: YES 1.1.7
00:02:41.704 Program pytest-3 found: NO
00:02:41.704 Program flake8 found: NO
00:02:41.704 Program misspell-fixer found: NO
00:02:41.704 Program restructuredtext-lint found: NO
00:02:41.704 Program valgrind found: YES (/usr/bin/valgrind)
00:02:41.704 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:41.704 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:41.704 Compiler for C supports arguments -Wwrite-strings: YES
00:02:41.704 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:41.704 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:41.704 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:41.704 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:41.704 Build targets in project: 8
00:02:41.704 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:41.704 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:41.704
00:02:41.704 libvfio-user 0.0.1
00:02:41.704
00:02:41.704 User defined options
00:02:41.704 buildtype : debug
00:02:41.704 default_library: shared
00:02:41.704 libdir : /usr/local/lib
00:02:41.704
00:02:41.704 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:42.275 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:42.275 [1/37] Compiling C object samples/lspci.p/lspci.c.o
00:02:42.275 [2/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:02:42.275 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:02:42.275 [4/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:02:42.275 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:02:42.275 [6/37] Compiling C object samples/null.p/null.c.o
00:02:42.275 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:02:42.275 [8/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:02:42.275 [9/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:02:42.275 [10/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:02:42.275 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:02:42.275 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:02:42.276 [13/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:02:42.276 [14/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:02:42.276 [15/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:02:42.276 [16/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:02:42.276 [17/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:02:42.276 [18/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:02:42.276 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:02:42.276 [20/37] Compiling C object samples/server.p/server.c.o
00:02:42.276 [21/37] Compiling C object test/unit_tests.p/mocks.c.o
00:02:42.534 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:02:42.534 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:02:42.534 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:02:42.534 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:02:42.534 [26/37] Compiling C object samples/client.p/client.c.o
00:02:42.534 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:02:42.534 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:02:42.534 [29/37] Linking target samples/client
00:02:42.534 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:02:42.534 [31/37] Linking target test/unit_tests
00:02:42.534 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:02:42.793 [33/37] Linking target samples/server
00:02:42.793 [34/37] Linking target samples/lspci
00:02:42.793 [35/37] Linking target samples/gpio-pci-idio-16
00:02:42.793 [36/37] Linking target samples/null
00:02:42.793 [37/37] Linking target samples/shadow_ioeventfd_server
00:02:42.793 INFO: autodetecting backend as ninja
00:02:42.793 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
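The libvfio-user steps just recorded follow the standard Meson out-of-tree flow: configure into build-debug (buildtype debug, default_library shared, libdir /usr/local/lib, per the User defined options summary), build with ninja, then stage the result with a DESTDIR'd meson install, which is the command the log shows next. Reconstructed below as a standalone sequence; the meson setup line is inferred from the options summary rather than printed verbatim in the log:

  # Hedged reconstruction of the libvfio-user build/install sequence above
  SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
  BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
  meson setup "$BUILD" "$SRC" --buildtype=debug -Ddefault_library=shared -Dlibdir=/usr/local/lib
  ninja -C "$BUILD"
  DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user \
      meson install --quiet -C "$BUILD"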
00:02:42.793 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:43.053 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:43.053 ninja: no work to do.
00:02:48.329 The Meson build system
00:02:48.329 Version: 1.5.0
00:02:48.329 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:02:48.329 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:02:48.329 Build type: native build
00:02:48.329 Program cat found: YES (/usr/bin/cat)
00:02:48.329 Project name: DPDK
00:02:48.329 Project version: 24.03.0
00:02:48.329 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:48.329 C linker for the host machine: cc ld.bfd 2.40-14
00:02:48.329 Host machine cpu family: x86_64
00:02:48.329 Host machine cpu: x86_64
00:02:48.329 Message: ## Building in Developer Mode ##
00:02:48.329 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:48.329 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:48.329 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:48.329 Program python3 found: YES (/usr/bin/python3)
00:02:48.329 Program cat found: YES (/usr/bin/cat)
00:02:48.329 Compiler for C supports arguments -march=native: YES
00:02:48.329 Checking for size of "void *" : 8
00:02:48.329 Checking for size of "void *" : 8 (cached)
00:02:48.329 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:48.329 Library m found: YES
00:02:48.329 Library numa found: YES
00:02:48.329 Has header "numaif.h" : YES
00:02:48.329 Library fdt found: NO
00:02:48.329 Library execinfo found: NO
00:02:48.329 Has header "execinfo.h" : YES
00:02:48.329 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:48.329 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:48.329 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:48.329 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:48.329 Run-time dependency openssl found: YES 3.1.1
00:02:48.329 Run-time dependency libpcap found: YES 1.10.4
00:02:48.329 Has header "pcap.h" with dependency libpcap: YES
00:02:48.329 Compiler for C supports arguments -Wcast-qual: YES
00:02:48.329 Compiler for C supports arguments -Wdeprecated: YES
00:02:48.329 Compiler for C supports arguments -Wformat: YES
00:02:48.329 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:48.329 Compiler for C supports arguments -Wformat-security: NO
00:02:48.329 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:48.329 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:48.329 Compiler for C supports arguments -Wnested-externs: YES
00:02:48.329 Compiler for C supports arguments -Wold-style-definition: YES
00:02:48.329 Compiler for C supports arguments -Wpointer-arith: YES
00:02:48.329 Compiler for C supports arguments -Wsign-compare: YES
00:02:48.329 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:48.329 Compiler for C supports arguments -Wundef: YES
00:02:48.329 Compiler for C supports arguments -Wwrite-strings: YES
00:02:48.329 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:48.329 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:48.329 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:48.329 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:48.329 Program objdump found: YES (/usr/bin/objdump)
00:02:48.329 Compiler for C supports arguments -mavx512f: YES
00:02:48.329 Checking if "AVX512 checking" compiles: YES
00:02:48.329 Fetching value of define "__SSE4_2__" : 1
00:02:48.329 Fetching value of define "__AES__" : 1
00:02:48.329 Fetching value of define "__AVX__" : 1
00:02:48.329 Fetching value of define "__AVX2__" : 1
00:02:48.329 Fetching value of define "__AVX512BW__" : 1
00:02:48.329 Fetching value of define "__AVX512CD__" : 1
00:02:48.329 Fetching value of define "__AVX512DQ__" : 1
00:02:48.329 Fetching value of define "__AVX512F__" : 1
00:02:48.329 Fetching value of define "__AVX512VL__" : 1
00:02:48.329 Fetching value of define "__PCLMUL__" : 1
00:02:48.329 Fetching value of define "__RDRND__" : 1
00:02:48.329 Fetching value of define "__RDSEED__" : 1
00:02:48.329 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:48.329 Fetching value of define "__znver1__" : (undefined)
00:02:48.329 Fetching value of define "__znver2__" : (undefined)
00:02:48.329 Fetching value of define "__znver3__" : (undefined)
00:02:48.329 Fetching value of define "__znver4__" : (undefined)
00:02:48.329 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:48.329 Message: lib/log: Defining dependency "log"
00:02:48.329 Message: lib/kvargs: Defining dependency "kvargs"
00:02:48.329 Message: lib/telemetry: Defining dependency "telemetry"
00:02:48.329 Checking for function "getentropy" : NO
00:02:48.329 Message: lib/eal: Defining dependency "eal"
00:02:48.329 Message: lib/ring: Defining dependency "ring"
00:02:48.329 Message: lib/rcu: Defining dependency "rcu"
00:02:48.329 Message: lib/mempool: Defining dependency "mempool"
00:02:48.329 Message: lib/mbuf: Defining dependency "mbuf"
00:02:48.329 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:48.329 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:48.329 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:48.329 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:48.329 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:48.329 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:48.329 Compiler for C supports arguments -mpclmul: YES
00:02:48.329 Compiler for C supports arguments -maes: YES
00:02:48.329 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:48.329 Compiler for C supports arguments -mavx512bw: YES
00:02:48.329 Compiler for C supports arguments -mavx512dq: YES
00:02:48.329 Compiler for C supports arguments -mavx512vl: YES
00:02:48.329 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:48.329 Compiler for C supports arguments -mavx2: YES
00:02:48.329 Compiler for C supports arguments -mavx: YES
00:02:48.329 Message: lib/net: Defining dependency "net"
00:02:48.329 Message: lib/meter: Defining dependency "meter"
00:02:48.329 Message: lib/ethdev: Defining dependency "ethdev"
00:02:48.329 Message: lib/pci: Defining dependency "pci"
00:02:48.329 Message: lib/cmdline: Defining dependency "cmdline"
00:02:48.329 Message: lib/hash: Defining dependency "hash"
00:02:48.329 Message: lib/timer: Defining dependency "timer"
00:02:48.329 Message: lib/compressdev: Defining dependency "compressdev"
00:02:48.329 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:48.329 Message: lib/dmadev: Defining dependency "dmadev"
00:02:48.329 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:48.329 Message: lib/power: Defining dependency "power"
00:02:48.329 Message: lib/reorder: Defining dependency "reorder"
00:02:48.329 Message: lib/security: Defining dependency "security"
00:02:48.329 Has header "linux/userfaultfd.h" : YES
00:02:48.329 Has header "linux/vduse.h" : YES
00:02:48.329 Message: lib/vhost: Defining dependency "vhost"
00:02:48.329 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:48.329 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:48.329 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:48.329 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:48.329 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:48.329 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:48.329 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:48.329 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:48.329 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:48.329 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:48.329 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:48.329 Configuring doxy-api-html.conf using configuration
00:02:48.329 Configuring doxy-api-man.conf using configuration
00:02:48.329 Program mandb found: YES (/usr/bin/mandb)
00:02:48.329 Program sphinx-build found: NO
00:02:48.329 Configuring rte_build_config.h using configuration
00:02:48.329 Message:
00:02:48.330 =================
00:02:48.330 Applications Enabled
00:02:48.330 =================
00:02:48.330
00:02:48.330 apps:
00:02:48.330
00:02:48.330
00:02:48.330 Message:
00:02:48.330 =================
00:02:48.330 Libraries Enabled
00:02:48.330 =================
00:02:48.330
00:02:48.330 libs:
00:02:48.330 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:48.330 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:48.330 cryptodev, dmadev, power, reorder, security, vhost,
00:02:48.330
00:02:48.330 Message:
00:02:48.330 ===============
00:02:48.330 Drivers Enabled
00:02:48.330 ===============
00:02:48.330
00:02:48.330 common:
00:02:48.330
00:02:48.330 bus:
00:02:48.330 pci, vdev,
00:02:48.330 mempool:
00:02:48.330 ring,
00:02:48.330 dma:
00:02:48.330
00:02:48.330 net:
00:02:48.330
00:02:48.330 crypto:
00:02:48.330
00:02:48.330 compress:
00:02:48.330
00:02:48.330 vdpa:
00:02:48.330
00:02:48.330
00:02:48.330 Message:
00:02:48.330 =================
00:02:48.330 Content Skipped
00:02:48.330 =================
00:02:48.330
00:02:48.330 apps:
00:02:48.330 dumpcap: explicitly disabled via build config
00:02:48.330 graph: explicitly disabled via build config
00:02:48.330 pdump: explicitly disabled via build config
00:02:48.330 proc-info: explicitly disabled via build config
00:02:48.330 test-acl: explicitly disabled via build config
00:02:48.330 test-bbdev: explicitly disabled via build config
00:02:48.330 test-cmdline: explicitly disabled via build config
00:02:48.330 test-compress-perf: explicitly disabled via build config
00:02:48.330 test-crypto-perf: explicitly disabled via build config
00:02:48.330 test-dma-perf: explicitly disabled via build config
00:02:48.330 test-eventdev: explicitly disabled via build config
00:02:48.330 test-fib: explicitly disabled via build config
00:02:48.330 test-flow-perf: explicitly disabled via build config
00:02:48.330 test-gpudev: explicitly disabled via build config
00:02:48.330 test-mldev: explicitly disabled via build config
00:02:48.330 test-pipeline: explicitly disabled via build config
00:02:48.330 test-pmd: explicitly disabled via build config
00:02:48.330 test-regex: explicitly disabled via build config
00:02:48.330 test-sad: explicitly disabled via build config
00:02:48.330 test-security-perf: explicitly disabled via build config
00:02:48.330
00:02:48.330 libs:
00:02:48.330 argparse: explicitly disabled via build config
00:02:48.330 metrics: explicitly disabled via build config
00:02:48.330 acl: explicitly disabled via build config
00:02:48.330 bbdev: explicitly disabled via build config
00:02:48.330 bitratestats: explicitly disabled via build config
00:02:48.330 bpf: explicitly disabled via build config
00:02:48.330 cfgfile: explicitly disabled via build config
00:02:48.330 distributor: explicitly disabled via build config
00:02:48.330 efd: explicitly disabled via build config
00:02:48.330 eventdev: explicitly disabled via build config
00:02:48.330 dispatcher: explicitly disabled via build config
00:02:48.330 gpudev: explicitly disabled via build config
00:02:48.330 gro: explicitly disabled via build config
00:02:48.330 gso: explicitly disabled via build config
00:02:48.330 ip_frag: explicitly disabled via build config
00:02:48.330 jobstats: explicitly disabled via build config
00:02:48.330 latencystats: explicitly disabled via build config
00:02:48.330 lpm: explicitly disabled via build config
00:02:48.330 member: explicitly disabled via build config
00:02:48.330 pcapng: explicitly disabled via build config
00:02:48.330 rawdev: explicitly disabled via build config
00:02:48.330 regexdev: explicitly disabled via build config
00:02:48.330 mldev: explicitly disabled via build config
00:02:48.330 rib: explicitly disabled via build config
00:02:48.330 sched: explicitly disabled via build config
00:02:48.330 stack: explicitly disabled via build config
00:02:48.330 ipsec: explicitly disabled via build config
00:02:48.330 pdcp: explicitly disabled via build config
00:02:48.330 fib: explicitly disabled via build config
00:02:48.330 port: explicitly disabled via build config
00:02:48.330 pdump: explicitly disabled via build config
00:02:48.330 table: explicitly disabled via build config
00:02:48.330 pipeline: explicitly disabled via build config
00:02:48.330 graph: explicitly disabled via build config
00:02:48.330 node: explicitly disabled via build config
00:02:48.330
00:02:48.330 drivers:
00:02:48.330 common/cpt: not in enabled drivers build config
00:02:48.330 common/dpaax: not in enabled drivers build config
00:02:48.330 common/iavf: not in enabled drivers build config
00:02:48.330 common/idpf: not in enabled drivers build config
00:02:48.330 common/ionic: not in enabled drivers build config
00:02:48.330 common/mvep: not in enabled drivers build config
00:02:48.330 common/octeontx: not in enabled drivers build config
00:02:48.330 bus/auxiliary: not in enabled drivers build config
00:02:48.330 bus/cdx: not in enabled drivers build config
00:02:48.330 bus/dpaa: not in enabled drivers build config
00:02:48.330 bus/fslmc: not in enabled drivers build config
00:02:48.330 bus/ifpga: not in enabled drivers build config
00:02:48.330 bus/platform: not in enabled drivers build config
00:02:48.330 bus/uacce: not in enabled drivers build config
00:02:48.330 bus/vmbus: not in enabled drivers build config
00:02:48.330 common/cnxk: not in enabled drivers build config
00:02:48.330 common/mlx5: not in enabled drivers build config
00:02:48.330 common/nfp: not in enabled drivers build config
00:02:48.330 common/nitrox: not in enabled drivers build config
00:02:48.330 common/qat: not in enabled drivers build config
00:02:48.330 common/sfc_efx: not in enabled drivers build config
00:02:48.330 mempool/bucket: not in enabled drivers build config
00:02:48.330 mempool/cnxk: not in enabled drivers build config
00:02:48.330 mempool/dpaa: not in enabled drivers build config
00:02:48.330 mempool/dpaa2: not in enabled drivers build config
00:02:48.330 mempool/octeontx: not in enabled drivers build config
00:02:48.330 mempool/stack: not in enabled drivers build config
00:02:48.330 dma/cnxk: not in enabled drivers build config
00:02:48.330 dma/dpaa: not in enabled drivers build config
00:02:48.330 dma/dpaa2: not in enabled drivers build config
00:02:48.330 dma/hisilicon: not in enabled drivers build config
00:02:48.330 dma/idxd: not in enabled drivers build config
00:02:48.330 dma/ioat: not in enabled drivers build config
00:02:48.330 dma/skeleton: not in enabled drivers build config
00:02:48.330 net/af_packet: not in enabled drivers build config
00:02:48.330 net/af_xdp: not in enabled drivers build config
00:02:48.330 net/ark: not in enabled drivers build config
00:02:48.330 net/atlantic: not in enabled drivers build config
00:02:48.330 net/avp: not in enabled drivers build config
00:02:48.330 net/axgbe: not in enabled drivers build config
00:02:48.330 net/bnx2x: not in enabled drivers build config
00:02:48.330 net/bnxt: not in enabled drivers build config
00:02:48.330 net/bonding: not in enabled drivers build config
00:02:48.330 net/cnxk: not in enabled drivers build config
00:02:48.330 net/cpfl: not in enabled drivers build config
00:02:48.330 net/cxgbe: not in enabled drivers build config
00:02:48.330 net/dpaa: not in enabled drivers build config
00:02:48.330 net/dpaa2: not in enabled drivers build config
00:02:48.330 net/e1000: not in enabled drivers build config
00:02:48.330 net/ena: not in enabled drivers build config
00:02:48.330 net/enetc: not in enabled drivers build config
00:02:48.330 net/enetfec: not in enabled drivers build config
00:02:48.330 net/enic: not in enabled drivers build config
00:02:48.330 net/failsafe: not in enabled drivers build config
00:02:48.330 net/fm10k: not in enabled drivers build config
00:02:48.330 net/gve: not in enabled drivers build config
00:02:48.330 net/hinic: not in enabled drivers build config
00:02:48.330 net/hns3: not in enabled drivers build config
00:02:48.330 net/i40e: not in enabled drivers build config
00:02:48.330 net/iavf: not in enabled drivers build config
00:02:48.330 net/ice: not in enabled drivers build config
00:02:48.330 net/idpf: not in enabled drivers build config
00:02:48.330 net/igc: not in enabled drivers build config
00:02:48.330 net/ionic: not in enabled drivers build config
00:02:48.330 net/ipn3ke: not in enabled drivers build config
00:02:48.330 net/ixgbe: not in enabled drivers build config
00:02:48.330 net/mana: not in enabled drivers build config
00:02:48.330 net/memif: not in enabled drivers build config
00:02:48.330 net/mlx4: not in enabled drivers build config
00:02:48.330 net/mlx5: not in enabled drivers build config
00:02:48.330 net/mvneta: not in enabled drivers build config
00:02:48.330 net/mvpp2: not in enabled drivers build config
00:02:48.330 net/netvsc: not in enabled drivers build config
00:02:48.330 net/nfb: not in enabled drivers build config
00:02:48.330 net/nfp: not in enabled drivers build config
00:02:48.330 net/ngbe: not in enabled drivers build config
00:02:48.330 net/null: not in enabled drivers build config
00:02:48.330 net/octeontx: not in enabled drivers build config
00:02:48.330 net/octeon_ep: not in enabled drivers build config
00:02:48.330 net/pcap: not in enabled drivers build config
00:02:48.330 net/pfe: not in enabled drivers build config
00:02:48.330 net/qede: not in enabled drivers build config
00:02:48.330 net/ring: not in enabled drivers build config
00:02:48.330 net/sfc: not in enabled drivers build config
00:02:48.330 net/softnic: not in enabled drivers build config
00:02:48.330 net/tap: not in enabled drivers build config
00:02:48.330 net/thunderx: not in enabled drivers build config
00:02:48.330 net/txgbe: not in enabled drivers build config
00:02:48.330 net/vdev_netvsc: not in enabled drivers build config
00:02:48.330 net/vhost: not in enabled drivers build config
00:02:48.330 net/virtio: not in enabled drivers build config
00:02:48.330 net/vmxnet3: not in enabled drivers build config
00:02:48.330 raw/*: missing internal dependency, "rawdev"
00:02:48.330 crypto/armv8: not in enabled drivers build config
00:02:48.330 crypto/bcmfs: not in enabled drivers build config
00:02:48.330 crypto/caam_jr: not in enabled drivers build config
00:02:48.330 crypto/ccp: not in enabled drivers build config
00:02:48.330 crypto/cnxk: not in enabled drivers build config
00:02:48.330 crypto/dpaa_sec: not in enabled drivers build config
00:02:48.330 crypto/dpaa2_sec: not in enabled drivers build config
00:02:48.330 crypto/ipsec_mb: not in enabled drivers build config
00:02:48.330 crypto/mlx5: not in enabled drivers build config
00:02:48.330 crypto/mvsam: not in enabled drivers build config
00:02:48.330 crypto/nitrox: not in enabled drivers build config
00:02:48.330 crypto/null: not in enabled drivers build config
00:02:48.330 crypto/octeontx: not in enabled drivers build config
00:02:48.330 crypto/openssl: not in enabled drivers build config
00:02:48.330 crypto/scheduler: not in enabled drivers build config
00:02:48.330 crypto/uadk: not in enabled drivers build config
00:02:48.330 crypto/virtio: not in enabled drivers build config
00:02:48.330 compress/isal: not in enabled drivers build config
00:02:48.330 compress/mlx5: not in enabled drivers build config
00:02:48.330 compress/nitrox: not in enabled drivers build config
00:02:48.330 compress/octeontx: not in enabled drivers build config
00:02:48.330 compress/zlib: not in enabled drivers build config
00:02:48.330 regex/*: missing internal dependency, "regexdev"
00:02:48.330 ml/*: missing internal dependency, "mldev"
00:02:48.330 vdpa/ifc: not in enabled drivers build config
00:02:48.330 vdpa/mlx5: not in enabled drivers build config
00:02:48.330 vdpa/nfp: not in enabled drivers build config
00:02:48.330 vdpa/sfc: not in enabled drivers build config
00:02:48.330 event/*: missing internal dependency, "eventdev"
00:02:48.330 baseband/*: missing internal dependency, "bbdev"
00:02:48.330 gpu/*: missing internal dependency, "gpudev"
00:02:48.330
00:02:48.330
00:02:48.330 Build targets in project: 85
00:02:48.330
00:02:48.330 DPDK 24.03.0
00:02:48.330
00:02:48.330 User defined options
00:02:48.330 buildtype : debug
00:02:48.330 default_library : shared
00:02:48.330 libdir : lib
00:02:48.330 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:02:48.330 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:48.330 c_link_args :
00:02:48.330 cpu_instruction_set: native
00:02:48.331 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf
00:02:48.331 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro
00:02:48.331 enable_docs : false
00:02:48.331 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:02:48.331 enable_kmods : false
00:02:48.331 max_lcores : 128
00:02:48.331 tests : false
00:02:48.331
00:02:48.331 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:48.903 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:02:48.903 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:48.903 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:48.903 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:48.903 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:48.903 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:48.903 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:48.903 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:48.903 [8/268] Linking static target lib/librte_kvargs.a
00:02:48.903 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:48.903 [10/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:48.903 [11/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:48.903 [12/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:48.903 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:48.903 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:49.163 [15/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:49.163 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:49.163 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:49.163 [18/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:49.163 [19/268] Linking static target lib/librte_log.a
00:02:49.163 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:49.163 [21/268] Linking static target lib/librte_pci.a
00:02:49.163 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:49.163 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:49.163 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:49.424 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:49.424 [26/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:49.424 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:49.424 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:49.424 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:49.424 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:49.424 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:49.424 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:49.424 [33/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:49.424 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:49.424 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:49.424 [36/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:49.424 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:49.424 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:49.424 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:49.424 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:49.424 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:49.424 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:49.424 [43/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:49.424 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:49.424 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:49.424 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:49.424 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:49.424 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:49.424 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:49.424 [50/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:49.424 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:49.424 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:49.424 [53/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:49.424 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:49.424 [55/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:49.424 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:49.424 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:49.424 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:49.424 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:49.424 [60/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:49.424 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:49.424 [62/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:49.424 [63/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:49.424 [64/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:49.424 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:49.424 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:49.424 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:49.424 [68/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:49.424 [69/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:49.424 [70/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:49.424 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:49.424 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:49.424 [73/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:49.424 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:49.424 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:49.424 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:49.424 [77/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:49.424 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:49.424 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:49.424 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:49.424 [81/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:49.424 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:49.424 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:49.424 [84/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:49.424 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:49.424 [86/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:49.424 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:49.424 [88/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:49.424 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:49.683 [90/268] Linking static target lib/librte_ring.a
00:02:49.683 [91/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:49.683 [92/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:49.683 [93/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:49.683 [94/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:49.683 [95/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:49.683 [96/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:49.683 [97/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:49.683 [98/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:49.683 [99/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:49.683 [100/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:49.683 [101/268] Linking static target lib/librte_net.a
00:02:49.683 [102/268] Linking static target lib/librte_telemetry.a
00:02:49.683 [103/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:49.683 [104/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:49.683 [105/268] Linking static target lib/librte_meter.a
00:02:49.683 [106/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:49.683 [107/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:49.683 [108/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:49.683 [109/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:49.683 [110/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:49.683 [111/268] Linking static target lib/librte_rcu.a
00:02:49.683 [112/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:49.683 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:49.683 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:49.683 [115/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:49.683 [116/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:49.683 [117/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:49.683 [118/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:49.683 [119/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:49.683 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:49.683 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:49.683 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:49.683 [123/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:49.683 [124/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:49.683 [125/268] Linking static target lib/librte_mempool.a
00:02:49.683 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:49.683 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:49.683 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:02:49.683 [129/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:49.683 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:49.683 [131/268] Linking static target lib/librte_eal.a
00:02:49.683 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:49.683 [133/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:02:49.683 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:49.683 [135/268] Linking static target lib/librte_cmdline.a
00:02:49.683 [136/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:49.683 [137/268] Linking static target lib/librte_mbuf.a
00:02:49.683 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:49.942 [139/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:49.942 [140/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:49.942 [141/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:49.942 [142/268] Linking target lib/librte_log.so.24.1
00:02:49.942 [143/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:49.942 [144/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:49.942 [145/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:49.942 [146/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:49.942 [147/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:49.942 [148/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:49.942 [149/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:49.942 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:49.942 [151/268] Linking static target lib/librte_compressdev.a
00:02:49.942 [152/268] Compiling C object
lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:49.942 [153/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:49.942 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:49.942 [155/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:49.942 [156/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:49.942 [157/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:49.942 [158/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:49.942 [159/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:49.942 [160/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:49.942 [161/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:49.942 [162/268] Linking static target lib/librte_timer.a 00:02:49.942 [163/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:49.942 [164/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:49.942 [165/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:49.942 [166/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:49.942 [167/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:49.942 [168/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:49.942 [169/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:49.942 [170/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.942 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:49.942 [172/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:49.942 [173/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:49.942 [174/268] Linking target lib/librte_kvargs.so.24.1 00:02:49.942 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:49.942 [176/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:49.942 [177/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:49.942 [178/268] Linking target lib/librte_telemetry.so.24.1 00:02:49.942 [179/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:49.942 [180/268] Linking static target lib/librte_reorder.a 00:02:49.942 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:50.201 [182/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:50.201 [183/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:50.201 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:50.201 [185/268] Linking static target lib/librte_dmadev.a 00:02:50.201 [186/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:50.201 [187/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:50.201 [188/268] Linking static target lib/librte_power.a 00:02:50.201 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:50.201 [190/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:50.201 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:50.201 [192/268] Linking static target lib/librte_security.a 00:02:50.201 [193/268] Generating 
drivers/rte_bus_vdev.pmd.c with a custom command 00:02:50.201 [194/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:50.201 [195/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:50.201 [196/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:50.201 [197/268] Linking static target drivers/librte_bus_vdev.a 00:02:50.201 [198/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:50.201 [199/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:50.201 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:50.201 [201/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:50.201 [202/268] Linking static target lib/librte_hash.a 00:02:50.201 [203/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:50.201 [204/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:50.201 [205/268] Linking static target drivers/librte_bus_pci.a 00:02:50.201 [206/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:50.459 [207/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:50.459 [208/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:50.459 [209/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:50.459 [210/268] Linking static target drivers/librte_mempool_ring.a 00:02:50.459 [211/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.459 [212/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.459 [213/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:50.459 [214/268] Linking static target lib/librte_cryptodev.a 00:02:50.459 [215/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.459 [216/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.459 [217/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.719 [218/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.719 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:50.719 [220/268] Linking static target lib/librte_ethdev.a 00:02:50.719 [221/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.719 [222/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.978 [223/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:50.978 [224/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.978 [225/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.978 [226/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.237 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.804 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:51.804 [229/268] Linking static target lib/librte_vhost.a 
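For reference, the option summary printed at the head of this DPDK build phase (enable_drivers, disable_libs, enable_docs, enable_kmods, max_lcores, tests) maps onto a standard `meson setup` invocation. A minimal sketch of that configuration, assuming stock DPDK meson option names; the real invocation is driven by SPDK's dpdkbuild wrapper, and the long disable_libs value is abbreviated here to its first few entries:

    # Sketch only: reproduces the configuration summarized above. Pass the
    # full disable_libs list from the summary in a real run.
    meson setup build-tmp \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
      -Ddisable_libs=port,lpm,ipsec,regexdev \
      -Dmax_lcores=128 \
      -Dtests=false \
      -Denable_docs=false \
      -Denable_kmods=false
    ninja -C build-tmp   # emits the [N/268] compile/link lines seen in this log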
00:02:52.372 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.749 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.020 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.955 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.956 [234/268] Linking target lib/librte_eal.so.24.1 00:02:59.956 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:59.956 [236/268] Linking target lib/librte_ring.so.24.1 00:02:59.956 [237/268] Linking target lib/librte_meter.so.24.1 00:02:59.956 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:59.956 [239/268] Linking target lib/librte_timer.so.24.1 00:02:59.956 [240/268] Linking target lib/librte_pci.so.24.1 00:02:59.956 [241/268] Linking target lib/librte_dmadev.so.24.1 00:03:00.216 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:00.216 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:00.216 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:00.216 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:00.216 [246/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:00.216 [247/268] Linking target lib/librte_rcu.so.24.1 00:03:00.216 [248/268] Linking target lib/librte_mempool.so.24.1 00:03:00.216 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:00.216 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:00.216 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:00.216 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:00.216 [253/268] Linking target lib/librte_mbuf.so.24.1 00:03:00.475 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:00.475 [255/268] Linking target lib/librte_compressdev.so.24.1 00:03:00.475 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:03:00.475 [257/268] Linking target lib/librte_net.so.24.1 00:03:00.475 [258/268] Linking target lib/librte_reorder.so.24.1 00:03:00.734 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:00.734 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:00.734 [261/268] Linking target lib/librte_security.so.24.1 00:03:00.734 [262/268] Linking target lib/librte_cmdline.so.24.1 00:03:00.734 [263/268] Linking target lib/librte_hash.so.24.1 00:03:00.734 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:00.734 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:00.734 [266/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:00.994 [267/268] Linking target lib/librte_vhost.so.24.1 00:03:00.994 [268/268] Linking target lib/librte_power.so.24.1 00:03:00.994 INFO: autodetecting backend as ninja 00:03:00.994 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:03:10.978 CC lib/log/log.o 00:03:10.978 CC lib/log/log_flags.o 00:03:10.978 CC lib/log/log_deprecated.o 00:03:10.978 CC lib/ut/ut.o 00:03:10.978 CC 
lib/ut_mock/mock.o 00:03:11.236 LIB libspdk_log.a 00:03:11.236 LIB libspdk_ut.a 00:03:11.236 LIB libspdk_ut_mock.a 00:03:11.236 SO libspdk_ut.so.2.0 00:03:11.236 SO libspdk_ut_mock.so.6.0 00:03:11.236 SO libspdk_log.so.7.1 00:03:11.236 SYMLINK libspdk_ut_mock.so 00:03:11.496 SYMLINK libspdk_ut.so 00:03:11.496 SYMLINK libspdk_log.so 00:03:11.755 CC lib/dma/dma.o 00:03:11.755 CC lib/ioat/ioat.o 00:03:11.755 CC lib/util/base64.o 00:03:11.755 CC lib/util/bit_array.o 00:03:11.755 CC lib/util/cpuset.o 00:03:11.755 CC lib/util/crc16.o 00:03:11.755 CC lib/util/crc32.o 00:03:11.755 CC lib/util/crc32c.o 00:03:11.755 CC lib/util/crc32_ieee.o 00:03:11.755 CC lib/util/crc64.o 00:03:11.755 CC lib/util/dif.o 00:03:11.755 CXX lib/trace_parser/trace.o 00:03:11.755 CC lib/util/fd.o 00:03:11.755 CC lib/util/fd_group.o 00:03:11.755 CC lib/util/file.o 00:03:11.755 CC lib/util/hexlify.o 00:03:11.755 CC lib/util/iov.o 00:03:11.755 CC lib/util/math.o 00:03:11.755 CC lib/util/net.o 00:03:11.755 CC lib/util/pipe.o 00:03:11.755 CC lib/util/strerror_tls.o 00:03:11.755 CC lib/util/string.o 00:03:11.755 CC lib/util/uuid.o 00:03:11.755 CC lib/util/xor.o 00:03:11.755 CC lib/util/zipf.o 00:03:11.755 CC lib/util/md5.o 00:03:11.755 CC lib/vfio_user/host/vfio_user_pci.o 00:03:11.755 CC lib/vfio_user/host/vfio_user.o 00:03:12.013 LIB libspdk_dma.a 00:03:12.013 SO libspdk_dma.so.5.0 00:03:12.013 LIB libspdk_ioat.a 00:03:12.013 SYMLINK libspdk_dma.so 00:03:12.013 SO libspdk_ioat.so.7.0 00:03:12.013 SYMLINK libspdk_ioat.so 00:03:12.013 LIB libspdk_vfio_user.a 00:03:12.013 SO libspdk_vfio_user.so.5.0 00:03:12.271 LIB libspdk_util.a 00:03:12.271 SYMLINK libspdk_vfio_user.so 00:03:12.271 SO libspdk_util.so.10.1 00:03:12.271 SYMLINK libspdk_util.so 00:03:12.530 LIB libspdk_trace_parser.a 00:03:12.530 SO libspdk_trace_parser.so.6.0 00:03:12.530 SYMLINK libspdk_trace_parser.so 00:03:12.530 CC lib/idxd/idxd.o 00:03:12.530 CC lib/idxd/idxd_user.o 00:03:12.530 CC lib/env_dpdk/env.o 00:03:12.530 CC lib/idxd/idxd_kernel.o 00:03:12.530 CC lib/rdma_utils/rdma_utils.o 00:03:12.788 CC lib/env_dpdk/memory.o 00:03:12.788 CC lib/env_dpdk/pci.o 00:03:12.788 CC lib/env_dpdk/init.o 00:03:12.788 CC lib/json/json_parse.o 00:03:12.788 CC lib/vmd/vmd.o 00:03:12.788 CC lib/json/json_util.o 00:03:12.788 CC lib/env_dpdk/threads.o 00:03:12.788 CC lib/conf/conf.o 00:03:12.788 CC lib/env_dpdk/pci_ioat.o 00:03:12.788 CC lib/vmd/led.o 00:03:12.788 CC lib/env_dpdk/pci_virtio.o 00:03:12.788 CC lib/json/json_write.o 00:03:12.788 CC lib/env_dpdk/pci_vmd.o 00:03:12.788 CC lib/env_dpdk/pci_idxd.o 00:03:12.788 CC lib/env_dpdk/pci_event.o 00:03:12.788 CC lib/env_dpdk/sigbus_handler.o 00:03:12.788 CC lib/env_dpdk/pci_dpdk.o 00:03:12.788 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:12.788 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:12.788 LIB libspdk_conf.a 00:03:13.047 LIB libspdk_rdma_utils.a 00:03:13.047 SO libspdk_conf.so.6.0 00:03:13.047 LIB libspdk_json.a 00:03:13.047 SO libspdk_rdma_utils.so.1.0 00:03:13.047 SYMLINK libspdk_conf.so 00:03:13.047 SO libspdk_json.so.6.0 00:03:13.047 SYMLINK libspdk_rdma_utils.so 00:03:13.047 SYMLINK libspdk_json.so 00:03:13.047 LIB libspdk_idxd.a 00:03:13.305 SO libspdk_idxd.so.12.1 00:03:13.305 LIB libspdk_vmd.a 00:03:13.305 SO libspdk_vmd.so.6.0 00:03:13.305 SYMLINK libspdk_idxd.so 00:03:13.305 SYMLINK libspdk_vmd.so 00:03:13.305 CC lib/rdma_provider/common.o 00:03:13.305 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:13.305 CC lib/jsonrpc/jsonrpc_server.o 00:03:13.305 CC lib/jsonrpc/jsonrpc_client.o 00:03:13.305 CC 
lib/jsonrpc/jsonrpc_server_tcp.o 00:03:13.305 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:13.564 LIB libspdk_rdma_provider.a 00:03:13.564 SO libspdk_rdma_provider.so.7.0 00:03:13.564 LIB libspdk_jsonrpc.a 00:03:13.564 SO libspdk_jsonrpc.so.6.0 00:03:13.564 SYMLINK libspdk_rdma_provider.so 00:03:13.564 SYMLINK libspdk_jsonrpc.so 00:03:13.823 LIB libspdk_env_dpdk.a 00:03:13.823 SO libspdk_env_dpdk.so.15.1 00:03:13.823 SYMLINK libspdk_env_dpdk.so 00:03:14.082 CC lib/rpc/rpc.o 00:03:14.082 LIB libspdk_rpc.a 00:03:14.082 SO libspdk_rpc.so.6.0 00:03:14.341 SYMLINK libspdk_rpc.so 00:03:14.600 CC lib/trace/trace.o 00:03:14.600 CC lib/trace/trace_flags.o 00:03:14.600 CC lib/trace/trace_rpc.o 00:03:14.600 CC lib/keyring/keyring.o 00:03:14.600 CC lib/keyring/keyring_rpc.o 00:03:14.600 CC lib/notify/notify.o 00:03:14.600 CC lib/notify/notify_rpc.o 00:03:14.600 LIB libspdk_notify.a 00:03:14.859 SO libspdk_notify.so.6.0 00:03:14.859 LIB libspdk_keyring.a 00:03:14.859 LIB libspdk_trace.a 00:03:14.859 SO libspdk_keyring.so.2.0 00:03:14.859 SO libspdk_trace.so.11.0 00:03:14.859 SYMLINK libspdk_notify.so 00:03:14.859 SYMLINK libspdk_keyring.so 00:03:14.859 SYMLINK libspdk_trace.so 00:03:15.118 CC lib/thread/thread.o 00:03:15.118 CC lib/sock/sock.o 00:03:15.118 CC lib/thread/iobuf.o 00:03:15.118 CC lib/sock/sock_rpc.o 00:03:15.686 LIB libspdk_sock.a 00:03:15.686 SO libspdk_sock.so.10.0 00:03:15.686 SYMLINK libspdk_sock.so 00:03:15.945 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:15.945 CC lib/nvme/nvme_ctrlr.o 00:03:15.946 CC lib/nvme/nvme_fabric.o 00:03:15.946 CC lib/nvme/nvme_ns_cmd.o 00:03:15.946 CC lib/nvme/nvme_pcie_common.o 00:03:15.946 CC lib/nvme/nvme_ns.o 00:03:15.946 CC lib/nvme/nvme_pcie.o 00:03:15.946 CC lib/nvme/nvme_qpair.o 00:03:15.946 CC lib/nvme/nvme_quirks.o 00:03:15.946 CC lib/nvme/nvme.o 00:03:15.946 CC lib/nvme/nvme_transport.o 00:03:15.946 CC lib/nvme/nvme_discovery.o 00:03:15.946 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:15.946 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:15.946 CC lib/nvme/nvme_tcp.o 00:03:15.946 CC lib/nvme/nvme_opal.o 00:03:15.946 CC lib/nvme/nvme_io_msg.o 00:03:15.946 CC lib/nvme/nvme_poll_group.o 00:03:15.946 CC lib/nvme/nvme_zns.o 00:03:15.946 CC lib/nvme/nvme_stubs.o 00:03:15.946 CC lib/nvme/nvme_auth.o 00:03:15.946 CC lib/nvme/nvme_cuse.o 00:03:15.946 CC lib/nvme/nvme_vfio_user.o 00:03:15.946 CC lib/nvme/nvme_rdma.o 00:03:16.204 LIB libspdk_thread.a 00:03:16.204 SO libspdk_thread.so.11.0 00:03:16.463 SYMLINK libspdk_thread.so 00:03:16.721 CC lib/blob/blobstore.o 00:03:16.721 CC lib/blob/request.o 00:03:16.721 CC lib/blob/zeroes.o 00:03:16.721 CC lib/blob/blob_bs_dev.o 00:03:16.721 CC lib/init/json_config.o 00:03:16.721 CC lib/fsdev/fsdev_rpc.o 00:03:16.721 CC lib/fsdev/fsdev.o 00:03:16.721 CC lib/init/subsystem.o 00:03:16.721 CC lib/fsdev/fsdev_io.o 00:03:16.721 CC lib/init/subsystem_rpc.o 00:03:16.721 CC lib/init/rpc.o 00:03:16.721 CC lib/accel/accel.o 00:03:16.721 CC lib/accel/accel_rpc.o 00:03:16.721 CC lib/virtio/virtio_vfio_user.o 00:03:16.721 CC lib/virtio/virtio.o 00:03:16.721 CC lib/virtio/virtio_vhost_user.o 00:03:16.721 CC lib/accel/accel_sw.o 00:03:16.721 CC lib/virtio/virtio_pci.o 00:03:16.721 CC lib/vfu_tgt/tgt_endpoint.o 00:03:16.721 CC lib/vfu_tgt/tgt_rpc.o 00:03:16.980 LIB libspdk_init.a 00:03:16.980 SO libspdk_init.so.6.0 00:03:16.980 LIB libspdk_vfu_tgt.a 00:03:16.980 LIB libspdk_virtio.a 00:03:16.980 SO libspdk_vfu_tgt.so.3.0 00:03:16.980 SYMLINK libspdk_init.so 00:03:16.980 SO libspdk_virtio.so.7.0 00:03:16.980 SYMLINK libspdk_vfu_tgt.so 00:03:16.980 SYMLINK 
libspdk_virtio.so 00:03:17.239 LIB libspdk_fsdev.a 00:03:17.239 SO libspdk_fsdev.so.2.0 00:03:17.239 CC lib/event/app.o 00:03:17.239 CC lib/event/reactor.o 00:03:17.239 CC lib/event/log_rpc.o 00:03:17.239 CC lib/event/app_rpc.o 00:03:17.239 CC lib/event/scheduler_static.o 00:03:17.239 SYMLINK libspdk_fsdev.so 00:03:17.497 LIB libspdk_accel.a 00:03:17.497 SO libspdk_accel.so.16.0 00:03:17.497 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:17.497 LIB libspdk_nvme.a 00:03:17.497 LIB libspdk_event.a 00:03:17.497 SYMLINK libspdk_accel.so 00:03:17.755 SO libspdk_event.so.14.0 00:03:17.755 SO libspdk_nvme.so.15.0 00:03:17.755 SYMLINK libspdk_event.so 00:03:18.014 SYMLINK libspdk_nvme.so 00:03:18.014 CC lib/bdev/bdev.o 00:03:18.014 CC lib/bdev/bdev_rpc.o 00:03:18.014 CC lib/bdev/bdev_zone.o 00:03:18.014 CC lib/bdev/part.o 00:03:18.014 CC lib/bdev/scsi_nvme.o 00:03:18.014 LIB libspdk_fuse_dispatcher.a 00:03:18.014 SO libspdk_fuse_dispatcher.so.1.0 00:03:18.272 SYMLINK libspdk_fuse_dispatcher.so 00:03:18.839 LIB libspdk_blob.a 00:03:18.839 SO libspdk_blob.so.11.0 00:03:18.839 SYMLINK libspdk_blob.so 00:03:19.405 CC lib/blobfs/blobfs.o 00:03:19.405 CC lib/blobfs/tree.o 00:03:19.405 CC lib/lvol/lvol.o 00:03:19.662 LIB libspdk_bdev.a 00:03:19.662 SO libspdk_bdev.so.17.0 00:03:19.920 LIB libspdk_blobfs.a 00:03:19.920 SO libspdk_blobfs.so.10.0 00:03:19.920 SYMLINK libspdk_bdev.so 00:03:19.920 LIB libspdk_lvol.a 00:03:19.920 SYMLINK libspdk_blobfs.so 00:03:19.920 SO libspdk_lvol.so.10.0 00:03:19.920 SYMLINK libspdk_lvol.so 00:03:20.180 CC lib/ftl/ftl_core.o 00:03:20.180 CC lib/ftl/ftl_init.o 00:03:20.180 CC lib/nvmf/ctrlr.o 00:03:20.180 CC lib/scsi/dev.o 00:03:20.180 CC lib/nvmf/ctrlr_discovery.o 00:03:20.180 CC lib/ublk/ublk.o 00:03:20.180 CC lib/ftl/ftl_layout.o 00:03:20.180 CC lib/scsi/lun.o 00:03:20.180 CC lib/ftl/ftl_debug.o 00:03:20.180 CC lib/nvmf/subsystem.o 00:03:20.180 CC lib/ublk/ublk_rpc.o 00:03:20.180 CC lib/nvmf/ctrlr_bdev.o 00:03:20.180 CC lib/ftl/ftl_io.o 00:03:20.180 CC lib/nbd/nbd.o 00:03:20.180 CC lib/scsi/port.o 00:03:20.180 CC lib/ftl/ftl_sb.o 00:03:20.180 CC lib/nbd/nbd_rpc.o 00:03:20.180 CC lib/nvmf/nvmf.o 00:03:20.180 CC lib/scsi/scsi.o 00:03:20.180 CC lib/ftl/ftl_l2p.o 00:03:20.180 CC lib/nvmf/nvmf_rpc.o 00:03:20.180 CC lib/scsi/scsi_bdev.o 00:03:20.180 CC lib/nvmf/transport.o 00:03:20.180 CC lib/ftl/ftl_l2p_flat.o 00:03:20.180 CC lib/scsi/scsi_pr.o 00:03:20.180 CC lib/nvmf/mdns_server.o 00:03:20.180 CC lib/nvmf/tcp.o 00:03:20.180 CC lib/nvmf/stubs.o 00:03:20.180 CC lib/scsi/scsi_rpc.o 00:03:20.180 CC lib/ftl/ftl_nv_cache.o 00:03:20.180 CC lib/scsi/task.o 00:03:20.180 CC lib/ftl/ftl_band.o 00:03:20.180 CC lib/nvmf/vfio_user.o 00:03:20.180 CC lib/nvmf/rdma.o 00:03:20.180 CC lib/ftl/ftl_band_ops.o 00:03:20.180 CC lib/ftl/ftl_writer.o 00:03:20.180 CC lib/nvmf/auth.o 00:03:20.180 CC lib/ftl/ftl_reloc.o 00:03:20.180 CC lib/ftl/ftl_rq.o 00:03:20.180 CC lib/ftl/ftl_l2p_cache.o 00:03:20.180 CC lib/ftl/ftl_p2l.o 00:03:20.180 CC lib/ftl/ftl_p2l_log.o 00:03:20.180 CC lib/ftl/mngt/ftl_mngt.o 00:03:20.180 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:20.180 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:20.180 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:20.180 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:20.180 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:20.180 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:20.180 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:20.180 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:20.180 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:20.180 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:20.180 CC lib/ftl/mngt/ftl_mngt_recovery.o 
00:03:20.180 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:20.180 CC lib/ftl/utils/ftl_md.o 00:03:20.180 CC lib/ftl/utils/ftl_conf.o 00:03:20.180 CC lib/ftl/utils/ftl_mempool.o 00:03:20.180 CC lib/ftl/utils/ftl_bitmap.o 00:03:20.180 CC lib/ftl/utils/ftl_property.o 00:03:20.180 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:20.180 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:20.180 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:20.180 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:20.180 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:20.180 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:20.180 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:20.180 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:20.180 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:20.180 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:20.180 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:20.180 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:20.180 CC lib/ftl/base/ftl_base_dev.o 00:03:20.180 CC lib/ftl/base/ftl_base_bdev.o 00:03:20.180 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:20.180 CC lib/ftl/ftl_trace.o 00:03:20.747 LIB libspdk_scsi.a 00:03:20.747 LIB libspdk_nbd.a 00:03:20.747 SO libspdk_scsi.so.9.0 00:03:20.747 SO libspdk_nbd.so.7.0 00:03:20.747 LIB libspdk_ublk.a 00:03:20.747 SYMLINK libspdk_nbd.so 00:03:20.747 SYMLINK libspdk_scsi.so 00:03:21.005 SO libspdk_ublk.so.3.0 00:03:21.005 SYMLINK libspdk_ublk.so 00:03:21.264 CC lib/vhost/vhost.o 00:03:21.264 CC lib/vhost/vhost_rpc.o 00:03:21.264 CC lib/vhost/vhost_scsi.o 00:03:21.264 CC lib/vhost/vhost_blk.o 00:03:21.264 CC lib/vhost/rte_vhost_user.o 00:03:21.264 CC lib/iscsi/conn.o 00:03:21.264 CC lib/iscsi/init_grp.o 00:03:21.264 CC lib/iscsi/iscsi.o 00:03:21.264 CC lib/iscsi/param.o 00:03:21.264 CC lib/iscsi/portal_grp.o 00:03:21.264 CC lib/iscsi/tgt_node.o 00:03:21.264 CC lib/iscsi/iscsi_subsystem.o 00:03:21.264 CC lib/iscsi/iscsi_rpc.o 00:03:21.264 CC lib/iscsi/task.o 00:03:21.264 LIB libspdk_ftl.a 00:03:21.523 SO libspdk_ftl.so.9.0 00:03:21.523 SYMLINK libspdk_ftl.so 00:03:22.090 LIB libspdk_nvmf.a 00:03:22.090 LIB libspdk_vhost.a 00:03:22.090 SO libspdk_nvmf.so.20.0 00:03:22.090 SO libspdk_vhost.so.8.0 00:03:22.090 SYMLINK libspdk_vhost.so 00:03:22.090 SYMLINK libspdk_nvmf.so 00:03:22.090 LIB libspdk_iscsi.a 00:03:22.349 SO libspdk_iscsi.so.8.0 00:03:22.349 SYMLINK libspdk_iscsi.so 00:03:22.917 CC module/vfu_device/vfu_virtio.o 00:03:22.917 CC module/vfu_device/vfu_virtio_blk.o 00:03:22.917 CC module/vfu_device/vfu_virtio_rpc.o 00:03:22.917 CC module/env_dpdk/env_dpdk_rpc.o 00:03:22.917 CC module/vfu_device/vfu_virtio_scsi.o 00:03:22.917 CC module/vfu_device/vfu_virtio_fs.o 00:03:22.917 LIB libspdk_env_dpdk_rpc.a 00:03:22.917 CC module/sock/posix/posix.o 00:03:22.917 CC module/scheduler/gscheduler/gscheduler.o 00:03:22.917 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:22.917 CC module/blob/bdev/blob_bdev.o 00:03:22.917 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:22.917 CC module/keyring/linux/keyring.o 00:03:22.917 CC module/keyring/linux/keyring_rpc.o 00:03:22.917 CC module/accel/error/accel_error.o 00:03:22.917 CC module/accel/iaa/accel_iaa.o 00:03:22.917 CC module/accel/error/accel_error_rpc.o 00:03:22.917 SO libspdk_env_dpdk_rpc.so.6.0 00:03:22.917 CC module/accel/iaa/accel_iaa_rpc.o 00:03:22.917 CC module/accel/ioat/accel_ioat_rpc.o 00:03:22.917 CC module/keyring/file/keyring.o 00:03:22.917 CC module/accel/ioat/accel_ioat.o 00:03:22.917 CC module/keyring/file/keyring_rpc.o 00:03:22.917 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:22.917 CC module/fsdev/aio/fsdev_aio.o 00:03:22.917 CC 
module/accel/dsa/accel_dsa.o 00:03:23.174 CC module/accel/dsa/accel_dsa_rpc.o 00:03:23.174 CC module/fsdev/aio/linux_aio_mgr.o 00:03:23.174 SYMLINK libspdk_env_dpdk_rpc.so 00:03:23.174 LIB libspdk_scheduler_gscheduler.a 00:03:23.174 LIB libspdk_keyring_linux.a 00:03:23.174 LIB libspdk_scheduler_dpdk_governor.a 00:03:23.174 LIB libspdk_keyring_file.a 00:03:23.174 SO libspdk_scheduler_gscheduler.so.4.0 00:03:23.174 LIB libspdk_scheduler_dynamic.a 00:03:23.174 SO libspdk_keyring_linux.so.1.0 00:03:23.174 LIB libspdk_accel_iaa.a 00:03:23.174 LIB libspdk_accel_ioat.a 00:03:23.174 LIB libspdk_accel_error.a 00:03:23.174 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:23.174 SO libspdk_scheduler_dynamic.so.4.0 00:03:23.174 SO libspdk_keyring_file.so.2.0 00:03:23.174 SO libspdk_accel_iaa.so.3.0 00:03:23.174 SO libspdk_accel_error.so.2.0 00:03:23.174 SO libspdk_accel_ioat.so.6.0 00:03:23.174 SYMLINK libspdk_scheduler_gscheduler.so 00:03:23.174 LIB libspdk_blob_bdev.a 00:03:23.433 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:23.433 SYMLINK libspdk_keyring_linux.so 00:03:23.433 SYMLINK libspdk_scheduler_dynamic.so 00:03:23.433 SO libspdk_blob_bdev.so.11.0 00:03:23.433 SYMLINK libspdk_keyring_file.so 00:03:23.433 SYMLINK libspdk_accel_iaa.so 00:03:23.433 SYMLINK libspdk_accel_ioat.so 00:03:23.433 LIB libspdk_accel_dsa.a 00:03:23.433 SYMLINK libspdk_accel_error.so 00:03:23.433 SO libspdk_accel_dsa.so.5.0 00:03:23.433 SYMLINK libspdk_blob_bdev.so 00:03:23.433 LIB libspdk_vfu_device.a 00:03:23.433 SYMLINK libspdk_accel_dsa.so 00:03:23.433 SO libspdk_vfu_device.so.3.0 00:03:23.433 SYMLINK libspdk_vfu_device.so 00:03:23.692 LIB libspdk_fsdev_aio.a 00:03:23.692 LIB libspdk_sock_posix.a 00:03:23.692 SO libspdk_fsdev_aio.so.1.0 00:03:23.692 SO libspdk_sock_posix.so.6.0 00:03:23.692 SYMLINK libspdk_fsdev_aio.so 00:03:23.692 SYMLINK libspdk_sock_posix.so 00:03:23.692 CC module/bdev/passthru/vbdev_passthru.o 00:03:23.692 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:23.692 CC module/bdev/lvol/vbdev_lvol.o 00:03:23.692 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:23.950 CC module/bdev/delay/vbdev_delay.o 00:03:23.950 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:23.950 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:23.950 CC module/blobfs/bdev/blobfs_bdev.o 00:03:23.950 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:23.950 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:23.950 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:23.950 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:23.950 CC module/bdev/iscsi/bdev_iscsi.o 00:03:23.950 CC module/bdev/aio/bdev_aio_rpc.o 00:03:23.950 CC module/bdev/malloc/bdev_malloc.o 00:03:23.950 CC module/bdev/aio/bdev_aio.o 00:03:23.950 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:23.950 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:23.950 CC module/bdev/ftl/bdev_ftl.o 00:03:23.950 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:23.950 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:23.950 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:23.950 CC module/bdev/nvme/bdev_nvme.o 00:03:23.950 CC module/bdev/nvme/bdev_mdns_client.o 00:03:23.950 CC module/bdev/nvme/nvme_rpc.o 00:03:23.950 CC module/bdev/gpt/gpt.o 00:03:23.950 CC module/bdev/nvme/vbdev_opal.o 00:03:23.950 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:23.950 CC module/bdev/error/vbdev_error.o 00:03:23.950 CC module/bdev/gpt/vbdev_gpt.o 00:03:23.950 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:23.950 CC module/bdev/split/vbdev_split.o 00:03:23.950 CC module/bdev/error/vbdev_error_rpc.o 00:03:23.950 CC module/bdev/raid/bdev_raid.o 
00:03:23.950 CC module/bdev/split/vbdev_split_rpc.o 00:03:23.950 CC module/bdev/null/bdev_null.o 00:03:23.950 CC module/bdev/null/bdev_null_rpc.o 00:03:23.950 CC module/bdev/raid/bdev_raid_sb.o 00:03:23.950 CC module/bdev/raid/bdev_raid_rpc.o 00:03:23.950 CC module/bdev/raid/raid0.o 00:03:23.950 CC module/bdev/raid/raid1.o 00:03:23.950 CC module/bdev/raid/concat.o 00:03:23.950 LIB libspdk_blobfs_bdev.a 00:03:24.208 SO libspdk_blobfs_bdev.so.6.0 00:03:24.208 LIB libspdk_bdev_split.a 00:03:24.208 LIB libspdk_bdev_passthru.a 00:03:24.208 SO libspdk_bdev_split.so.6.0 00:03:24.208 SYMLINK libspdk_blobfs_bdev.so 00:03:24.208 LIB libspdk_bdev_error.a 00:03:24.208 LIB libspdk_bdev_null.a 00:03:24.208 LIB libspdk_bdev_zone_block.a 00:03:24.208 SO libspdk_bdev_passthru.so.6.0 00:03:24.208 LIB libspdk_bdev_gpt.a 00:03:24.208 LIB libspdk_bdev_aio.a 00:03:24.208 LIB libspdk_bdev_ftl.a 00:03:24.208 SO libspdk_bdev_error.so.6.0 00:03:24.208 SO libspdk_bdev_null.so.6.0 00:03:24.208 SO libspdk_bdev_zone_block.so.6.0 00:03:24.208 SYMLINK libspdk_bdev_split.so 00:03:24.208 LIB libspdk_bdev_malloc.a 00:03:24.208 SO libspdk_bdev_ftl.so.6.0 00:03:24.208 SO libspdk_bdev_gpt.so.6.0 00:03:24.208 SO libspdk_bdev_aio.so.6.0 00:03:24.208 LIB libspdk_bdev_delay.a 00:03:24.208 SYMLINK libspdk_bdev_passthru.so 00:03:24.208 LIB libspdk_bdev_iscsi.a 00:03:24.208 SO libspdk_bdev_malloc.so.6.0 00:03:24.208 SYMLINK libspdk_bdev_error.so 00:03:24.208 SO libspdk_bdev_delay.so.6.0 00:03:24.208 SYMLINK libspdk_bdev_null.so 00:03:24.208 SYMLINK libspdk_bdev_zone_block.so 00:03:24.208 SYMLINK libspdk_bdev_ftl.so 00:03:24.208 SYMLINK libspdk_bdev_gpt.so 00:03:24.208 SO libspdk_bdev_iscsi.so.6.0 00:03:24.208 SYMLINK libspdk_bdev_aio.so 00:03:24.208 SYMLINK libspdk_bdev_malloc.so 00:03:24.208 SYMLINK libspdk_bdev_delay.so 00:03:24.467 SYMLINK libspdk_bdev_iscsi.so 00:03:24.467 LIB libspdk_bdev_lvol.a 00:03:24.467 LIB libspdk_bdev_virtio.a 00:03:24.467 SO libspdk_bdev_lvol.so.6.0 00:03:24.467 SO libspdk_bdev_virtio.so.6.0 00:03:24.467 SYMLINK libspdk_bdev_lvol.so 00:03:24.467 SYMLINK libspdk_bdev_virtio.so 00:03:24.726 LIB libspdk_bdev_raid.a 00:03:24.726 SO libspdk_bdev_raid.so.6.0 00:03:24.726 SYMLINK libspdk_bdev_raid.so 00:03:25.661 LIB libspdk_bdev_nvme.a 00:03:25.920 SO libspdk_bdev_nvme.so.7.1 00:03:25.920 SYMLINK libspdk_bdev_nvme.so 00:03:26.489 CC module/event/subsystems/iobuf/iobuf.o 00:03:26.489 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:26.489 CC module/event/subsystems/vmd/vmd.o 00:03:26.489 CC module/event/subsystems/keyring/keyring.o 00:03:26.489 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:26.489 CC module/event/subsystems/sock/sock.o 00:03:26.489 CC module/event/subsystems/fsdev/fsdev.o 00:03:26.489 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:26.489 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:26.489 CC module/event/subsystems/scheduler/scheduler.o 00:03:26.748 LIB libspdk_event_sock.a 00:03:26.748 LIB libspdk_event_keyring.a 00:03:26.748 LIB libspdk_event_vhost_blk.a 00:03:26.748 LIB libspdk_event_iobuf.a 00:03:26.748 LIB libspdk_event_scheduler.a 00:03:26.748 LIB libspdk_event_fsdev.a 00:03:26.748 LIB libspdk_event_vfu_tgt.a 00:03:26.748 LIB libspdk_event_vmd.a 00:03:26.748 SO libspdk_event_vhost_blk.so.3.0 00:03:26.748 SO libspdk_event_sock.so.5.0 00:03:26.748 SO libspdk_event_keyring.so.1.0 00:03:26.748 SO libspdk_event_scheduler.so.4.0 00:03:26.748 SO libspdk_event_iobuf.so.3.0 00:03:26.748 SO libspdk_event_fsdev.so.1.0 00:03:26.748 SO libspdk_event_vfu_tgt.so.3.0 00:03:26.748 SO 
libspdk_event_vmd.so.6.0 00:03:26.748 SYMLINK libspdk_event_vhost_blk.so 00:03:26.748 SYMLINK libspdk_event_scheduler.so 00:03:26.748 SYMLINK libspdk_event_sock.so 00:03:26.748 SYMLINK libspdk_event_keyring.so 00:03:26.748 SYMLINK libspdk_event_fsdev.so 00:03:26.748 SYMLINK libspdk_event_iobuf.so 00:03:26.748 SYMLINK libspdk_event_vfu_tgt.so 00:03:26.748 SYMLINK libspdk_event_vmd.so 00:03:27.007 CC module/event/subsystems/accel/accel.o 00:03:27.266 LIB libspdk_event_accel.a 00:03:27.266 SO libspdk_event_accel.so.6.0 00:03:27.266 SYMLINK libspdk_event_accel.so 00:03:27.524 CC module/event/subsystems/bdev/bdev.o 00:03:27.782 LIB libspdk_event_bdev.a 00:03:27.782 SO libspdk_event_bdev.so.6.0 00:03:27.782 SYMLINK libspdk_event_bdev.so 00:03:28.349 CC module/event/subsystems/nbd/nbd.o 00:03:28.349 CC module/event/subsystems/scsi/scsi.o 00:03:28.349 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:28.349 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:28.349 CC module/event/subsystems/ublk/ublk.o 00:03:28.349 LIB libspdk_event_ublk.a 00:03:28.349 LIB libspdk_event_nbd.a 00:03:28.349 LIB libspdk_event_scsi.a 00:03:28.349 SO libspdk_event_ublk.so.3.0 00:03:28.349 SO libspdk_event_nbd.so.6.0 00:03:28.349 SO libspdk_event_scsi.so.6.0 00:03:28.349 LIB libspdk_event_nvmf.a 00:03:28.349 SYMLINK libspdk_event_ublk.so 00:03:28.349 SYMLINK libspdk_event_nbd.so 00:03:28.349 SO libspdk_event_nvmf.so.6.0 00:03:28.608 SYMLINK libspdk_event_scsi.so 00:03:28.608 SYMLINK libspdk_event_nvmf.so 00:03:28.866 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:28.866 CC module/event/subsystems/iscsi/iscsi.o 00:03:28.866 LIB libspdk_event_vhost_scsi.a 00:03:28.866 LIB libspdk_event_iscsi.a 00:03:28.866 SO libspdk_event_vhost_scsi.so.3.0 00:03:28.866 SO libspdk_event_iscsi.so.6.0 00:03:29.125 SYMLINK libspdk_event_vhost_scsi.so 00:03:29.125 SYMLINK libspdk_event_iscsi.so 00:03:29.125 SO libspdk.so.6.0 00:03:29.125 SYMLINK libspdk.so 00:03:29.710 CXX app/trace/trace.o 00:03:29.710 CC app/spdk_lspci/spdk_lspci.o 00:03:29.710 CC app/trace_record/trace_record.o 00:03:29.710 CC app/spdk_nvme_discover/discovery_aer.o 00:03:29.710 CC app/spdk_top/spdk_top.o 00:03:29.710 CC app/spdk_nvme_perf/perf.o 00:03:29.710 CC test/rpc_client/rpc_client_test.o 00:03:29.710 TEST_HEADER include/spdk/accel.h 00:03:29.710 CC app/spdk_nvme_identify/identify.o 00:03:29.710 TEST_HEADER include/spdk/barrier.h 00:03:29.710 TEST_HEADER include/spdk/accel_module.h 00:03:29.710 TEST_HEADER include/spdk/assert.h 00:03:29.710 TEST_HEADER include/spdk/base64.h 00:03:29.710 TEST_HEADER include/spdk/bdev.h 00:03:29.710 TEST_HEADER include/spdk/bdev_module.h 00:03:29.710 TEST_HEADER include/spdk/bdev_zone.h 00:03:29.710 TEST_HEADER include/spdk/bit_array.h 00:03:29.710 TEST_HEADER include/spdk/bit_pool.h 00:03:29.710 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:29.710 TEST_HEADER include/spdk/blob_bdev.h 00:03:29.710 TEST_HEADER include/spdk/blobfs.h 00:03:29.710 TEST_HEADER include/spdk/blob.h 00:03:29.710 TEST_HEADER include/spdk/conf.h 00:03:29.710 TEST_HEADER include/spdk/config.h 00:03:29.710 TEST_HEADER include/spdk/cpuset.h 00:03:29.710 TEST_HEADER include/spdk/crc16.h 00:03:29.710 TEST_HEADER include/spdk/crc32.h 00:03:29.710 TEST_HEADER include/spdk/crc64.h 00:03:29.710 TEST_HEADER include/spdk/dif.h 00:03:29.710 TEST_HEADER include/spdk/dma.h 00:03:29.710 TEST_HEADER include/spdk/endian.h 00:03:29.710 TEST_HEADER include/spdk/env_dpdk.h 00:03:29.710 TEST_HEADER include/spdk/env.h 00:03:29.710 TEST_HEADER include/spdk/event.h 00:03:29.710 
TEST_HEADER include/spdk/fd_group.h 00:03:29.710 TEST_HEADER include/spdk/fd.h 00:03:29.710 TEST_HEADER include/spdk/file.h 00:03:29.710 TEST_HEADER include/spdk/fsdev.h 00:03:29.710 TEST_HEADER include/spdk/fsdev_module.h 00:03:29.710 TEST_HEADER include/spdk/ftl.h 00:03:29.710 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:29.710 CC app/spdk_dd/spdk_dd.o 00:03:29.710 TEST_HEADER include/spdk/gpt_spec.h 00:03:29.710 CC app/iscsi_tgt/iscsi_tgt.o 00:03:29.710 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:29.710 TEST_HEADER include/spdk/hexlify.h 00:03:29.710 TEST_HEADER include/spdk/histogram_data.h 00:03:29.710 TEST_HEADER include/spdk/idxd_spec.h 00:03:29.710 TEST_HEADER include/spdk/idxd.h 00:03:29.710 TEST_HEADER include/spdk/init.h 00:03:29.710 TEST_HEADER include/spdk/ioat.h 00:03:29.711 TEST_HEADER include/spdk/ioat_spec.h 00:03:29.711 TEST_HEADER include/spdk/iscsi_spec.h 00:03:29.711 TEST_HEADER include/spdk/json.h 00:03:29.711 CC app/nvmf_tgt/nvmf_main.o 00:03:29.711 TEST_HEADER include/spdk/jsonrpc.h 00:03:29.711 TEST_HEADER include/spdk/keyring_module.h 00:03:29.711 TEST_HEADER include/spdk/keyring.h 00:03:29.711 TEST_HEADER include/spdk/likely.h 00:03:29.711 TEST_HEADER include/spdk/lvol.h 00:03:29.711 CC app/spdk_tgt/spdk_tgt.o 00:03:29.711 TEST_HEADER include/spdk/log.h 00:03:29.711 TEST_HEADER include/spdk/mmio.h 00:03:29.711 TEST_HEADER include/spdk/memory.h 00:03:29.711 TEST_HEADER include/spdk/nbd.h 00:03:29.711 TEST_HEADER include/spdk/md5.h 00:03:29.711 TEST_HEADER include/spdk/net.h 00:03:29.711 TEST_HEADER include/spdk/nvme.h 00:03:29.711 TEST_HEADER include/spdk/nvme_intel.h 00:03:29.711 TEST_HEADER include/spdk/notify.h 00:03:29.711 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:29.711 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:29.711 TEST_HEADER include/spdk/nvme_spec.h 00:03:29.711 TEST_HEADER include/spdk/nvme_zns.h 00:03:29.711 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:29.711 TEST_HEADER include/spdk/nvmf_spec.h 00:03:29.711 TEST_HEADER include/spdk/nvmf_transport.h 00:03:29.711 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:29.711 TEST_HEADER include/spdk/nvmf.h 00:03:29.711 TEST_HEADER include/spdk/opal.h 00:03:29.711 TEST_HEADER include/spdk/opal_spec.h 00:03:29.711 TEST_HEADER include/spdk/pipe.h 00:03:29.711 TEST_HEADER include/spdk/pci_ids.h 00:03:29.711 TEST_HEADER include/spdk/queue.h 00:03:29.711 TEST_HEADER include/spdk/reduce.h 00:03:29.711 TEST_HEADER include/spdk/rpc.h 00:03:29.711 TEST_HEADER include/spdk/scheduler.h 00:03:29.711 TEST_HEADER include/spdk/scsi.h 00:03:29.711 TEST_HEADER include/spdk/scsi_spec.h 00:03:29.711 TEST_HEADER include/spdk/sock.h 00:03:29.711 TEST_HEADER include/spdk/stdinc.h 00:03:29.711 TEST_HEADER include/spdk/string.h 00:03:29.711 TEST_HEADER include/spdk/thread.h 00:03:29.711 TEST_HEADER include/spdk/trace.h 00:03:29.711 TEST_HEADER include/spdk/trace_parser.h 00:03:29.711 TEST_HEADER include/spdk/tree.h 00:03:29.711 TEST_HEADER include/spdk/ublk.h 00:03:29.711 TEST_HEADER include/spdk/uuid.h 00:03:29.711 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:29.711 TEST_HEADER include/spdk/util.h 00:03:29.711 TEST_HEADER include/spdk/version.h 00:03:29.711 TEST_HEADER include/spdk/xor.h 00:03:29.711 TEST_HEADER include/spdk/vhost.h 00:03:29.711 TEST_HEADER include/spdk/vmd.h 00:03:29.711 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:29.711 TEST_HEADER include/spdk/zipf.h 00:03:29.711 CXX test/cpp_headers/assert.o 00:03:29.711 CXX test/cpp_headers/accel.o 00:03:29.711 CXX test/cpp_headers/accel_module.o 00:03:29.711 
CXX test/cpp_headers/barrier.o 00:03:29.711 CXX test/cpp_headers/base64.o 00:03:29.711 CXX test/cpp_headers/bdev_module.o 00:03:29.711 CXX test/cpp_headers/bdev.o 00:03:29.711 CXX test/cpp_headers/bit_array.o 00:03:29.711 CXX test/cpp_headers/blob_bdev.o 00:03:29.711 CXX test/cpp_headers/bdev_zone.o 00:03:29.711 CXX test/cpp_headers/bit_pool.o 00:03:29.711 CXX test/cpp_headers/blobfs.o 00:03:29.711 CXX test/cpp_headers/blobfs_bdev.o 00:03:29.711 CXX test/cpp_headers/blob.o 00:03:29.711 CXX test/cpp_headers/config.o 00:03:29.711 CXX test/cpp_headers/conf.o 00:03:29.711 CXX test/cpp_headers/crc16.o 00:03:29.711 CXX test/cpp_headers/cpuset.o 00:03:29.711 CXX test/cpp_headers/crc32.o 00:03:29.711 CXX test/cpp_headers/crc64.o 00:03:29.711 CXX test/cpp_headers/dma.o 00:03:29.711 CXX test/cpp_headers/dif.o 00:03:29.711 CXX test/cpp_headers/endian.o 00:03:29.711 CXX test/cpp_headers/env.o 00:03:29.711 CXX test/cpp_headers/env_dpdk.o 00:03:29.711 CXX test/cpp_headers/event.o 00:03:29.711 CXX test/cpp_headers/fd_group.o 00:03:29.711 CXX test/cpp_headers/fd.o 00:03:29.711 CXX test/cpp_headers/file.o 00:03:29.711 CXX test/cpp_headers/fsdev_module.o 00:03:29.711 CXX test/cpp_headers/fsdev.o 00:03:29.711 CXX test/cpp_headers/fuse_dispatcher.o 00:03:29.711 CXX test/cpp_headers/ftl.o 00:03:29.711 CXX test/cpp_headers/gpt_spec.o 00:03:29.711 CXX test/cpp_headers/hexlify.o 00:03:29.711 CXX test/cpp_headers/idxd.o 00:03:29.711 CXX test/cpp_headers/histogram_data.o 00:03:29.711 CXX test/cpp_headers/init.o 00:03:29.711 CXX test/cpp_headers/idxd_spec.o 00:03:29.711 CXX test/cpp_headers/ioat_spec.o 00:03:29.711 CXX test/cpp_headers/ioat.o 00:03:29.711 CXX test/cpp_headers/iscsi_spec.o 00:03:29.711 CXX test/cpp_headers/json.o 00:03:29.711 CXX test/cpp_headers/keyring_module.o 00:03:29.711 CXX test/cpp_headers/jsonrpc.o 00:03:29.711 CXX test/cpp_headers/likely.o 00:03:29.711 CXX test/cpp_headers/keyring.o 00:03:29.711 CXX test/cpp_headers/log.o 00:03:29.711 CXX test/cpp_headers/memory.o 00:03:29.711 CXX test/cpp_headers/md5.o 00:03:29.711 CXX test/cpp_headers/lvol.o 00:03:29.711 CXX test/cpp_headers/mmio.o 00:03:29.711 CXX test/cpp_headers/nbd.o 00:03:29.711 CXX test/cpp_headers/net.o 00:03:29.711 CXX test/cpp_headers/nvme.o 00:03:29.711 CXX test/cpp_headers/notify.o 00:03:29.711 CXX test/cpp_headers/nvme_ocssd.o 00:03:29.711 CXX test/cpp_headers/nvme_intel.o 00:03:29.711 CXX test/cpp_headers/nvme_spec.o 00:03:29.711 CXX test/cpp_headers/nvme_zns.o 00:03:29.711 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:29.711 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:29.711 CXX test/cpp_headers/nvmf_cmd.o 00:03:29.711 CC examples/ioat/verify/verify.o 00:03:29.711 CXX test/cpp_headers/nvmf.o 00:03:29.711 CXX test/cpp_headers/nvmf_spec.o 00:03:29.711 CC examples/ioat/perf/perf.o 00:03:29.711 CXX test/cpp_headers/nvmf_transport.o 00:03:29.711 CXX test/cpp_headers/opal.o 00:03:29.711 CC test/app/histogram_perf/histogram_perf.o 00:03:29.711 CC examples/util/zipf/zipf.o 00:03:29.711 CC test/app/jsoncat/jsoncat.o 00:03:29.711 CC test/env/memory/memory_ut.o 00:03:29.711 CC test/env/vtophys/vtophys.o 00:03:29.711 CC test/dma/test_dma/test_dma.o 00:03:29.711 CC test/app/bdev_svc/bdev_svc.o 00:03:29.711 CC app/fio/nvme/fio_plugin.o 00:03:29.711 CC test/app/stub/stub.o 00:03:29.711 CC test/thread/poller_perf/poller_perf.o 00:03:29.711 CXX test/cpp_headers/opal_spec.o 00:03:29.711 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:29.711 CC test/env/pci/pci_ut.o 00:03:30.002 CC app/fio/bdev/fio_plugin.o 00:03:30.002 LINK 
rpc_client_test 00:03:30.002 LINK spdk_lspci 00:03:30.002 LINK spdk_nvme_discover 00:03:30.002 LINK interrupt_tgt 00:03:30.277 CC test/env/mem_callbacks/mem_callbacks.o 00:03:30.277 LINK spdk_trace_record 00:03:30.277 LINK iscsi_tgt 00:03:30.277 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:30.277 LINK jsoncat 00:03:30.277 LINK histogram_perf 00:03:30.277 LINK nvmf_tgt 00:03:30.277 LINK vtophys 00:03:30.277 CXX test/cpp_headers/pci_ids.o 00:03:30.277 CXX test/cpp_headers/pipe.o 00:03:30.278 CXX test/cpp_headers/queue.o 00:03:30.278 CXX test/cpp_headers/reduce.o 00:03:30.278 CXX test/cpp_headers/rpc.o 00:03:30.278 CXX test/cpp_headers/scheduler.o 00:03:30.278 LINK stub 00:03:30.278 CXX test/cpp_headers/scsi.o 00:03:30.278 LINK bdev_svc 00:03:30.278 LINK ioat_perf 00:03:30.278 CXX test/cpp_headers/scsi_spec.o 00:03:30.278 CXX test/cpp_headers/sock.o 00:03:30.278 CXX test/cpp_headers/stdinc.o 00:03:30.278 CXX test/cpp_headers/string.o 00:03:30.278 CXX test/cpp_headers/thread.o 00:03:30.278 CXX test/cpp_headers/trace.o 00:03:30.278 CXX test/cpp_headers/trace_parser.o 00:03:30.278 CXX test/cpp_headers/tree.o 00:03:30.278 CXX test/cpp_headers/ublk.o 00:03:30.278 CXX test/cpp_headers/util.o 00:03:30.278 CXX test/cpp_headers/uuid.o 00:03:30.278 CXX test/cpp_headers/version.o 00:03:30.278 CXX test/cpp_headers/vfio_user_pci.o 00:03:30.278 CXX test/cpp_headers/vfio_user_spec.o 00:03:30.278 CXX test/cpp_headers/vhost.o 00:03:30.278 CXX test/cpp_headers/vmd.o 00:03:30.278 CXX test/cpp_headers/xor.o 00:03:30.278 LINK spdk_tgt 00:03:30.278 CXX test/cpp_headers/zipf.o 00:03:30.278 LINK zipf 00:03:30.565 LINK poller_perf 00:03:30.565 LINK env_dpdk_post_init 00:03:30.565 LINK verify 00:03:30.565 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:30.565 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:30.565 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:30.565 LINK spdk_dd 00:03:30.565 LINK spdk_trace 00:03:30.565 LINK pci_ut 00:03:30.869 LINK test_dma 00:03:30.869 LINK spdk_bdev 00:03:30.869 LINK spdk_nvme 00:03:30.869 LINK spdk_nvme_identify 00:03:30.869 LINK nvme_fuzz 00:03:30.869 LINK spdk_nvme_perf 00:03:30.869 CC test/event/event_perf/event_perf.o 00:03:30.869 CC test/event/reactor/reactor.o 00:03:30.869 CC test/event/app_repeat/app_repeat.o 00:03:30.869 CC test/event/reactor_perf/reactor_perf.o 00:03:30.869 CC examples/vmd/lsvmd/lsvmd.o 00:03:30.869 LINK vhost_fuzz 00:03:30.869 CC examples/sock/hello_world/hello_sock.o 00:03:30.869 CC examples/vmd/led/led.o 00:03:30.869 CC test/event/scheduler/scheduler.o 00:03:30.869 CC examples/idxd/perf/perf.o 00:03:30.869 CC examples/thread/thread/thread_ex.o 00:03:30.869 LINK spdk_top 00:03:30.869 LINK mem_callbacks 00:03:31.138 CC app/vhost/vhost.o 00:03:31.138 LINK reactor 00:03:31.138 LINK reactor_perf 00:03:31.138 LINK event_perf 00:03:31.138 LINK app_repeat 00:03:31.138 LINK lsvmd 00:03:31.138 LINK led 00:03:31.138 LINK hello_sock 00:03:31.138 LINK scheduler 00:03:31.138 CC test/nvme/err_injection/err_injection.o 00:03:31.138 CC test/nvme/aer/aer.o 00:03:31.138 LINK thread 00:03:31.138 CC test/nvme/overhead/overhead.o 00:03:31.138 CC test/nvme/sgl/sgl.o 00:03:31.138 CC test/nvme/reset/reset.o 00:03:31.138 CC test/nvme/cuse/cuse.o 00:03:31.138 CC test/nvme/e2edp/nvme_dp.o 00:03:31.138 CC test/nvme/boot_partition/boot_partition.o 00:03:31.138 CC test/nvme/simple_copy/simple_copy.o 00:03:31.138 CC test/nvme/fused_ordering/fused_ordering.o 00:03:31.138 CC test/nvme/fdp/fdp.o 00:03:31.138 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:31.138 LINK memory_ut 
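The test/cpp_headers objects compiled above follow a header self-containment pattern: each public spdk/*.h is compiled as its own translation unit, so a header that fails to include its own dependencies breaks the build rather than a downstream consumer. A rough re-creation of that pattern, assuming a generic compiler driver rather than SPDK's actual test harness (paths and flags are illustrative):

    # Hypothetical per-header compile check; not taken from the SPDK build files.
    for h in include/spdk/*.h; do
      printf '#include <spdk/%s>\n' "$(basename "$h")" > /tmp/one_header.cpp
      c++ -I include -c /tmp/one_header.cpp -o /dev/null \
        || echo "header is not self-contained: $h"
    done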
00:03:31.138 CC test/nvme/connect_stress/connect_stress.o 00:03:31.138 CC test/nvme/compliance/nvme_compliance.o 00:03:31.138 CC test/nvme/reserve/reserve.o 00:03:31.138 LINK idxd_perf 00:03:31.138 CC test/nvme/startup/startup.o 00:03:31.138 CC test/accel/dif/dif.o 00:03:31.138 LINK vhost 00:03:31.138 CC test/blobfs/mkfs/mkfs.o 00:03:31.396 CC test/lvol/esnap/esnap.o 00:03:31.396 LINK err_injection 00:03:31.396 LINK boot_partition 00:03:31.396 LINK startup 00:03:31.396 LINK doorbell_aers 00:03:31.396 LINK fused_ordering 00:03:31.396 LINK connect_stress 00:03:31.396 LINK reserve 00:03:31.396 LINK reset 00:03:31.396 LINK nvme_dp 00:03:31.396 LINK aer 00:03:31.396 LINK sgl 00:03:31.396 LINK simple_copy 00:03:31.396 LINK overhead 00:03:31.396 LINK mkfs 00:03:31.396 LINK fdp 00:03:31.396 LINK nvme_compliance 00:03:31.655 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:31.655 CC examples/nvme/abort/abort.o 00:03:31.655 CC examples/nvme/hello_world/hello_world.o 00:03:31.655 CC examples/nvme/arbitration/arbitration.o 00:03:31.655 CC examples/nvme/hotplug/hotplug.o 00:03:31.655 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:31.655 CC examples/nvme/reconnect/reconnect.o 00:03:31.655 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:31.655 CC examples/accel/perf/accel_perf.o 00:03:31.655 CC examples/blob/hello_world/hello_blob.o 00:03:31.655 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:31.655 CC examples/blob/cli/blobcli.o 00:03:31.655 LINK pmr_persistence 00:03:31.655 LINK cmb_copy 00:03:31.655 LINK dif 00:03:31.913 LINK hello_world 00:03:31.913 LINK hotplug 00:03:31.913 LINK arbitration 00:03:31.913 LINK iscsi_fuzz 00:03:31.913 LINK reconnect 00:03:31.913 LINK hello_blob 00:03:31.913 LINK abort 00:03:31.913 LINK hello_fsdev 00:03:31.913 LINK nvme_manage 00:03:31.913 LINK accel_perf 00:03:31.913 LINK blobcli 00:03:32.171 LINK cuse 00:03:32.171 CC test/bdev/bdevio/bdevio.o 00:03:32.428 CC examples/bdev/hello_world/hello_bdev.o 00:03:32.428 CC examples/bdev/bdevperf/bdevperf.o 00:03:32.687 LINK bdevio 00:03:32.687 LINK hello_bdev 00:03:33.253 LINK bdevperf 00:03:33.821 CC examples/nvmf/nvmf/nvmf.o 00:03:33.821 LINK nvmf 00:03:35.200 LINK esnap 00:03:35.200 00:03:35.200 real 0m55.241s 00:03:35.200 user 8m1.090s 00:03:35.200 sys 3m39.133s 00:03:35.200 10:30:42 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:35.200 10:30:42 make -- common/autotest_common.sh@10 -- $ set +x 00:03:35.200 ************************************ 00:03:35.200 END TEST make 00:03:35.200 ************************************ 00:03:35.200 10:30:42 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:35.200 10:30:42 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:35.200 10:30:42 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:35.200 10:30:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:35.200 10:30:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:35.200 10:30:42 -- pm/common@44 -- $ pid=1415688 00:03:35.200 10:30:42 -- pm/common@50 -- $ kill -TERM 1415688 00:03:35.200 10:30:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:35.200 10:30:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:35.200 10:30:42 -- pm/common@44 -- $ pid=1415690 00:03:35.200 10:30:42 -- pm/common@50 -- $ kill -TERM 1415690 00:03:35.200 10:30:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:35.200 
10:30:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:35.200 10:30:42 -- pm/common@44 -- $ pid=1415691 00:03:35.200 10:30:42 -- pm/common@50 -- $ kill -TERM 1415691 00:03:35.200 10:30:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:35.200 10:30:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:35.200 10:30:42 -- pm/common@44 -- $ pid=1415716 00:03:35.200 10:30:42 -- pm/common@50 -- $ sudo -E kill -TERM 1415716 00:03:35.200 10:30:42 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:35.200 10:30:42 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:35.200 10:30:42 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:35.461 10:30:42 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:35.461 10:30:42 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:35.461 10:30:42 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:35.461 10:30:42 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:35.461 10:30:42 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:35.461 10:30:42 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:35.461 10:30:42 -- scripts/common.sh@336 -- # IFS=.-: 00:03:35.461 10:30:42 -- scripts/common.sh@336 -- # read -ra ver1 00:03:35.461 10:30:42 -- scripts/common.sh@337 -- # IFS=.-: 00:03:35.461 10:30:42 -- scripts/common.sh@337 -- # read -ra ver2 00:03:35.461 10:30:42 -- scripts/common.sh@338 -- # local 'op=<' 00:03:35.461 10:30:42 -- scripts/common.sh@340 -- # ver1_l=2 00:03:35.461 10:30:42 -- scripts/common.sh@341 -- # ver2_l=1 00:03:35.461 10:30:42 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:35.461 10:30:42 -- scripts/common.sh@344 -- # case "$op" in 00:03:35.461 10:30:42 -- scripts/common.sh@345 -- # : 1 00:03:35.461 10:30:42 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:35.461 10:30:42 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:35.461 10:30:42 -- scripts/common.sh@365 -- # decimal 1 00:03:35.461 10:30:42 -- scripts/common.sh@353 -- # local d=1 00:03:35.461 10:30:42 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:35.461 10:30:42 -- scripts/common.sh@355 -- # echo 1 00:03:35.461 10:30:42 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:35.461 10:30:42 -- scripts/common.sh@366 -- # decimal 2 00:03:35.461 10:30:42 -- scripts/common.sh@353 -- # local d=2 00:03:35.461 10:30:42 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:35.461 10:30:42 -- scripts/common.sh@355 -- # echo 2 00:03:35.461 10:30:42 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:35.461 10:30:42 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:35.461 10:30:42 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:35.461 10:30:42 -- scripts/common.sh@368 -- # return 0 00:03:35.461 10:30:42 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:35.461 10:30:42 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:35.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.461 --rc genhtml_branch_coverage=1 00:03:35.461 --rc genhtml_function_coverage=1 00:03:35.461 --rc genhtml_legend=1 00:03:35.461 --rc geninfo_all_blocks=1 00:03:35.461 --rc geninfo_unexecuted_blocks=1 00:03:35.461 00:03:35.461 ' 00:03:35.461 10:30:42 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:35.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.461 --rc genhtml_branch_coverage=1 00:03:35.461 --rc genhtml_function_coverage=1 00:03:35.461 --rc genhtml_legend=1 00:03:35.461 --rc geninfo_all_blocks=1 00:03:35.461 --rc geninfo_unexecuted_blocks=1 00:03:35.461 00:03:35.461 ' 00:03:35.461 10:30:42 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:35.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.461 --rc genhtml_branch_coverage=1 00:03:35.461 --rc genhtml_function_coverage=1 00:03:35.461 --rc genhtml_legend=1 00:03:35.461 --rc geninfo_all_blocks=1 00:03:35.461 --rc geninfo_unexecuted_blocks=1 00:03:35.461 00:03:35.461 ' 00:03:35.461 10:30:42 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:35.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.461 --rc genhtml_branch_coverage=1 00:03:35.461 --rc genhtml_function_coverage=1 00:03:35.461 --rc genhtml_legend=1 00:03:35.461 --rc geninfo_all_blocks=1 00:03:35.461 --rc geninfo_unexecuted_blocks=1 00:03:35.461 00:03:35.461 ' 00:03:35.461 10:30:42 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:35.461 10:30:42 -- nvmf/common.sh@7 -- # uname -s 00:03:35.461 10:30:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:35.461 10:30:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:35.461 10:30:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:35.461 10:30:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:35.461 10:30:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:35.461 10:30:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:35.461 10:30:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:35.461 10:30:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:35.461 10:30:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:35.461 10:30:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:35.461 10:30:42 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:03:35.461 10:30:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:03:35.461 10:30:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:35.461 10:30:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:35.461 10:30:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:35.461 10:30:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:35.461 10:30:42 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:35.461 10:30:42 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:35.461 10:30:42 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:35.461 10:30:42 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:35.461 10:30:42 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:35.461 10:30:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:35.461 10:30:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:35.461 10:30:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:35.461 10:30:42 -- paths/export.sh@5 -- # export PATH 00:03:35.461 10:30:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:35.461 10:30:42 -- nvmf/common.sh@51 -- # : 0 00:03:35.461 10:30:42 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:35.461 10:30:42 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:35.461 10:30:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:35.461 10:30:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:35.461 10:30:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:35.461 10:30:42 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:35.461 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:35.461 10:30:42 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:35.461 10:30:42 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:35.461 10:30:42 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:35.461 10:30:42 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:35.461 10:30:42 -- spdk/autotest.sh@32 -- # uname -s 00:03:35.461 10:30:42 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:35.461 10:30:42 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:35.461 10:30:42 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
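For reference, the lt/cmp_versions trace above (scripts/common.sh deciding lcov 1.15 < 2) is a plain element-wise version comparison: both strings are split on '.', '-' and ':' into arrays, corresponding components are compared numerically, and a missing component counts as 0. A minimal standalone sketch of the same idea, assuming numeric components and GNU bash (the ver_lt name is illustrative, not an SPDK helper):

    # ver_lt A B -> exit 0 if version A is strictly less than version B
    ver_lt() {
        local IFS=.-:                      # split on the same separators as the trace above
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # component greater -> not less-than
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # component smaller -> less-than
        done
        return 1                           # all components equal -> not less-than
    }
    ver_lt 1.15 2 && echo "lcov < 2, use legacy --rc options"

This mirrors why the run above selects the legacy '--rc lcov_branch_coverage=1 ...' option spelling for LCOV 1.15.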
00:03:35.461 10:30:42 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:35.461 10:30:42 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:35.461 10:30:42 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:35.461 10:30:42 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:35.461 10:30:42 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:35.461 10:30:42 -- spdk/autotest.sh@48 -- # udevadm_pid=1478457 00:03:35.461 10:30:42 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:35.461 10:30:42 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:35.461 10:30:42 -- pm/common@17 -- # local monitor 00:03:35.461 10:30:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:35.461 10:30:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:35.461 10:30:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:35.461 10:30:42 -- pm/common@21 -- # date +%s 00:03:35.461 10:30:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:35.461 10:30:42 -- pm/common@21 -- # date +%s 00:03:35.461 10:30:42 -- pm/common@25 -- # sleep 1 00:03:35.461 10:30:42 -- pm/common@21 -- # date +%s 00:03:35.461 10:30:42 -- pm/common@21 -- # date +%s 00:03:35.461 10:30:42 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732008642 00:03:35.461 10:30:42 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732008642 00:03:35.461 10:30:42 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732008642 00:03:35.461 10:30:42 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732008642 00:03:35.461 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732008642_collect-cpu-load.pm.log 00:03:35.461 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732008642_collect-vmstat.pm.log 00:03:35.462 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732008642_collect-cpu-temp.pm.log 00:03:35.462 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732008642_collect-bmc-pm.bmc.pm.log 00:03:36.401 10:30:43 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:36.401 10:30:43 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:36.401 10:30:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:36.401 10:30:43 -- common/autotest_common.sh@10 -- # set +x 00:03:36.401 10:30:43 -- spdk/autotest.sh@59 -- # create_test_list 00:03:36.401 10:30:43 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:36.401 10:30:43 -- common/autotest_common.sh@10 -- # set +x 00:03:36.401 10:30:43 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:36.401 10:30:43 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:36.660 10:30:43 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:36.660 10:30:43 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:36.660 10:30:43 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:36.660 10:30:43 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:36.660 10:30:43 -- common/autotest_common.sh@1457 -- # uname 00:03:36.660 10:30:43 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:36.660 10:30:43 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:36.660 10:30:43 -- common/autotest_common.sh@1477 -- # uname 00:03:36.660 10:30:43 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:36.660 10:30:43 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:36.660 10:30:43 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:36.660 lcov: LCOV version 1.15 00:03:36.660 10:30:43 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:58.596 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:58.596 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:01.880 10:31:09 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:01.881 10:31:09 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:01.881 10:31:09 -- common/autotest_common.sh@10 -- # set +x 00:04:01.881 10:31:09 -- spdk/autotest.sh@78 -- # rm -f 00:04:01.881 10:31:09 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:05.177 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:04:05.177 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:04:05.177 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:04:05.177 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:04:05.177 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:04:05.177 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:04:05.177 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:04:05.177 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:04:05.177 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:04:05.177 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:04:05.177 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:04:05.177 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:04:05.177 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:04:05.177 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:04:05.177 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:04:05.177 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:04:05.177 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:04:05.177 10:31:12 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:04:05.177 10:31:12 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:05.177 10:31:12 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:05.177 10:31:12 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:05.177 10:31:12 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:05.177 10:31:12 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:05.177 10:31:12 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:05.177 10:31:12 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:05.177 10:31:12 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:05.177 10:31:12 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:05.177 10:31:12 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:05.177 10:31:12 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:05.177 10:31:12 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:05.177 10:31:12 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:05.177 10:31:12 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:05.177 No valid GPT data, bailing 00:04:05.177 10:31:12 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:05.177 10:31:12 -- scripts/common.sh@394 -- # pt= 00:04:05.177 10:31:12 -- scripts/common.sh@395 -- # return 1 00:04:05.177 10:31:12 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:05.177 1+0 records in 00:04:05.177 1+0 records out 00:04:05.177 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0015374 s, 682 MB/s 00:04:05.177 10:31:12 -- spdk/autotest.sh@105 -- # sync 00:04:05.177 10:31:12 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:05.177 10:31:12 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:05.177 10:31:12 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:11.747 10:31:17 -- spdk/autotest.sh@111 -- # uname -s 00:04:11.747 10:31:17 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:11.747 10:31:17 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:11.747 10:31:17 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:13.652 Hugepages 00:04:13.652 node hugesize free / total 00:04:13.652 node0 1048576kB 0 / 0 00:04:13.652 node0 2048kB 0 / 0 00:04:13.652 node1 1048576kB 0 / 0 00:04:13.652 node1 2048kB 0 / 0 00:04:13.652 00:04:13.652 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:13.652 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:13.652 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:13.652 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:13.652 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:13.652 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:13.652 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:13.652 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:13.652 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:13.652 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:13.652 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:13.652 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:13.652 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:13.652 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:13.652 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:13.652 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:13.652 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:13.652 I/OAT 0000:80:04.7 8086 2021 
1 ioatdma - - 00:04:13.652 10:31:20 -- spdk/autotest.sh@117 -- # uname -s 00:04:13.652 10:31:20 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:13.652 10:31:20 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:13.652 10:31:20 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:16.943 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:16.943 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:16.943 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:16.943 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:16.943 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:16.943 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:16.943 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:16.943 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:16.943 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:16.943 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:16.943 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:16.943 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:16.943 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:16.943 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:16.943 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:16.943 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:17.512 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:17.512 10:31:24 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:18.449 10:31:25 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:18.449 10:31:25 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:18.449 10:31:25 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:18.449 10:31:25 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:18.449 10:31:25 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:18.449 10:31:25 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:18.449 10:31:25 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:18.449 10:31:25 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:18.449 10:31:25 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:18.449 10:31:25 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:18.449 10:31:25 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:18.449 10:31:25 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:21.783 Waiting for block devices as requested 00:04:21.783 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:21.783 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:21.783 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:21.783 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:21.783 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:21.783 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:21.783 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:22.043 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:22.043 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:22.043 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:22.303 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:22.303 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:22.303 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:22.303 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:22.562 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:22.562 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:22.562 0000:80:04.0 (8086 2021): vfio-pci 
-> ioatdma 00:04:22.822 10:31:30 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:22.822 10:31:30 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:22.822 10:31:30 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:22.822 10:31:30 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:04:22.822 10:31:30 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:22.822 10:31:30 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:22.822 10:31:30 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:22.822 10:31:30 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:22.822 10:31:30 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:22.822 10:31:30 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:22.822 10:31:30 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:22.822 10:31:30 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:22.823 10:31:30 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:22.823 10:31:30 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:04:22.823 10:31:30 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:22.823 10:31:30 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:22.823 10:31:30 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:22.823 10:31:30 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:22.823 10:31:30 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:22.823 10:31:30 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:22.823 10:31:30 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:22.823 10:31:30 -- common/autotest_common.sh@1543 -- # continue 00:04:22.823 10:31:30 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:22.823 10:31:30 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:22.823 10:31:30 -- common/autotest_common.sh@10 -- # set +x 00:04:22.823 10:31:30 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:22.823 10:31:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:22.823 10:31:30 -- common/autotest_common.sh@10 -- # set +x 00:04:22.823 10:31:30 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:26.116 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:26.116 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:26.116 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:26.116 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:26.116 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:26.116 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:26.116 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:26.116 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:26.116 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:26.116 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:26.116 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:26.116 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:26.116 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:26.116 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:26.116 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:26.116 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:26.685 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:26.685 10:31:34 -- spdk/autotest.sh@127 -- # timing_exit afterboot 
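The id-ctrl parsing traced above comes down to one bit test: OACS (Optional Admin Command Support) bit 3, mask 0x8, advertises namespace management, and unvmcap reports unallocated NVM capacity (0 here, so nothing to revert). A short sketch of the same check, assuming nvme-cli is installed and using the /dev/nvme0 controller seen in this run:

    # Extract the OACS field, e.g. ' 0xe', exactly as the trace above does
    oacs=$(nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2)
    oacs_ns_manage=$(( oacs & 0x8 ))       # bit 3: namespace management supported
    if [[ $oacs_ns_manage -ne 0 ]]; then
        unvmcap=$(nvme id-ctrl /dev/nvme0 | grep unvmcap | cut -d: -f2)
        echo "NS management supported, unallocated capacity:${unvmcap}"
    fi

With oacs=0xe as logged, 0xe & 0x8 = 8, which is why the trace sets oacs_ns_manage=8 and takes the [[ 8 -ne 0 ]] branch.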
00:04:26.685 10:31:34 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:26.685 10:31:34 -- common/autotest_common.sh@10 -- # set +x 00:04:26.685 10:31:34 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:26.685 10:31:34 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:26.685 10:31:34 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:26.685 10:31:34 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:26.685 10:31:34 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:26.685 10:31:34 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:26.685 10:31:34 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:26.685 10:31:34 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:26.685 10:31:34 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:26.685 10:31:34 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:26.685 10:31:34 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:26.685 10:31:34 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:26.685 10:31:34 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:26.944 10:31:34 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:26.945 10:31:34 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:26.945 10:31:34 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:26.945 10:31:34 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:26.945 10:31:34 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:26.945 10:31:34 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:26.945 10:31:34 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:26.945 10:31:34 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:04:26.945 10:31:34 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:04:26.945 10:31:34 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:04:26.945 10:31:34 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=1492893 00:04:26.945 10:31:34 -- common/autotest_common.sh@1585 -- # waitforlisten 1492893 00:04:26.945 10:31:34 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:26.945 10:31:34 -- common/autotest_common.sh@835 -- # '[' -z 1492893 ']' 00:04:26.945 10:31:34 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.945 10:31:34 -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:26.945 10:31:34 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:26.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.945 10:31:34 -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:26.945 10:31:34 -- common/autotest_common.sh@10 -- # set +x 00:04:26.945 [2024-11-19 10:31:34.213484] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
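The get_nvme_bdfs_by_id step above selects controllers by PCI device id (0x0a54 here) read from /sys/bus/pci/devices/<bdf>/device. The run itself enumerates BDFs via gen_nvme.sh piped through jq; the sketch below instead walks sysfs directly and filters on the NVMe class code, so the enumeration path is an assumption for illustration rather than the script's own mechanism (run as root so the sysfs files are readable):

    want=0x0a54                            # PCI device id to match, as in the trace above
    bdfs=()
    for dev in /sys/bus/pci/devices/*; do
        [[ $(cat "$dev/class") == 0x010802 ]] || continue   # NVMe (mass storage, NVM Express)
        if [[ $(cat "$dev/device") == "$want" ]]; then
            bdfs+=("$(basename "$dev")")   # basename of the sysfs path is the BDF
        fi
    done
    printf '%s\n' "${bdfs[@]}"             # e.g. 0000:5e:00.0, matching the printf above

On this host exactly one controller matches, which is why the trace prints the single BDF 0000:5e:00.0 before starting spdk_tgt.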
00:04:26.945 [2024-11-19 10:31:34.213537] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1492893 ] 00:04:26.945 [2024-11-19 10:31:34.287626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.945 [2024-11-19 10:31:34.330972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.204 10:31:34 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:27.204 10:31:34 -- common/autotest_common.sh@868 -- # return 0 00:04:27.204 10:31:34 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:04:27.204 10:31:34 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:04:27.204 10:31:34 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:30.493 nvme0n1 00:04:30.493 10:31:37 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:30.493 [2024-11-19 10:31:37.734778] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:30.493 request: 00:04:30.493 { 00:04:30.493 "nvme_ctrlr_name": "nvme0", 00:04:30.493 "password": "test", 00:04:30.493 "method": "bdev_nvme_opal_revert", 00:04:30.493 "req_id": 1 00:04:30.493 } 00:04:30.493 Got JSON-RPC error response 00:04:30.493 response: 00:04:30.493 { 00:04:30.493 "code": -32602, 00:04:30.493 "message": "Invalid parameters" 00:04:30.493 } 00:04:30.493 10:31:37 -- common/autotest_common.sh@1591 -- # true 00:04:30.493 10:31:37 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:04:30.493 10:31:37 -- common/autotest_common.sh@1595 -- # killprocess 1492893 00:04:30.493 10:31:37 -- common/autotest_common.sh@954 -- # '[' -z 1492893 ']' 00:04:30.493 10:31:37 -- common/autotest_common.sh@958 -- # kill -0 1492893 00:04:30.493 10:31:37 -- common/autotest_common.sh@959 -- # uname 00:04:30.493 10:31:37 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:30.493 10:31:37 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1492893 00:04:30.494 10:31:37 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:30.494 10:31:37 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:30.494 10:31:37 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1492893' 00:04:30.494 killing process with pid 1492893 00:04:30.494 10:31:37 -- common/autotest_common.sh@973 -- # kill 1492893 00:04:30.494 10:31:37 -- common/autotest_common.sh@978 -- # wait 1492893 00:04:32.395 10:31:39 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:32.395 10:31:39 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:32.395 10:31:39 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:32.395 10:31:39 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:32.395 10:31:39 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:32.395 10:31:39 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:32.395 10:31:39 -- common/autotest_common.sh@10 -- # set +x 00:04:32.395 10:31:39 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:32.395 10:31:39 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:32.395 10:31:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.395 10:31:39 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:04:32.395 10:31:39 -- common/autotest_common.sh@10 -- # set +x 00:04:32.395 ************************************ 00:04:32.395 START TEST env 00:04:32.395 ************************************ 00:04:32.395 10:31:39 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:32.395 * Looking for test storage... 00:04:32.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:32.395 10:31:39 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:32.395 10:31:39 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:32.395 10:31:39 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:32.395 10:31:39 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:32.395 10:31:39 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.395 10:31:39 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.395 10:31:39 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.395 10:31:39 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.395 10:31:39 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.395 10:31:39 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.395 10:31:39 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.395 10:31:39 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.395 10:31:39 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:32.395 10:31:39 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.395 10:31:39 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.395 10:31:39 env -- scripts/common.sh@344 -- # case "$op" in 00:04:32.396 10:31:39 env -- scripts/common.sh@345 -- # : 1 00:04:32.396 10:31:39 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.396 10:31:39 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:32.396 10:31:39 env -- scripts/common.sh@365 -- # decimal 1 00:04:32.396 10:31:39 env -- scripts/common.sh@353 -- # local d=1 00:04:32.396 10:31:39 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.396 10:31:39 env -- scripts/common.sh@355 -- # echo 1 00:04:32.396 10:31:39 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.396 10:31:39 env -- scripts/common.sh@366 -- # decimal 2 00:04:32.396 10:31:39 env -- scripts/common.sh@353 -- # local d=2 00:04:32.396 10:31:39 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.396 10:31:39 env -- scripts/common.sh@355 -- # echo 2 00:04:32.396 10:31:39 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.396 10:31:39 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.396 10:31:39 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.396 10:31:39 env -- scripts/common.sh@368 -- # return 0 00:04:32.396 10:31:39 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.396 10:31:39 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:32.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.396 --rc genhtml_branch_coverage=1 00:04:32.396 --rc genhtml_function_coverage=1 00:04:32.396 --rc genhtml_legend=1 00:04:32.396 --rc geninfo_all_blocks=1 00:04:32.396 --rc geninfo_unexecuted_blocks=1 00:04:32.396 00:04:32.396 ' 00:04:32.396 10:31:39 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:32.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.396 --rc genhtml_branch_coverage=1 00:04:32.396 --rc genhtml_function_coverage=1 00:04:32.396 --rc genhtml_legend=1 00:04:32.396 --rc geninfo_all_blocks=1 00:04:32.396 --rc geninfo_unexecuted_blocks=1 00:04:32.396 00:04:32.396 ' 00:04:32.396 10:31:39 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:32.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.396 --rc genhtml_branch_coverage=1 00:04:32.396 --rc genhtml_function_coverage=1 00:04:32.396 --rc genhtml_legend=1 00:04:32.396 --rc geninfo_all_blocks=1 00:04:32.396 --rc geninfo_unexecuted_blocks=1 00:04:32.396 00:04:32.396 ' 00:04:32.396 10:31:39 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:32.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.396 --rc genhtml_branch_coverage=1 00:04:32.396 --rc genhtml_function_coverage=1 00:04:32.396 --rc genhtml_legend=1 00:04:32.396 --rc geninfo_all_blocks=1 00:04:32.396 --rc geninfo_unexecuted_blocks=1 00:04:32.396 00:04:32.396 ' 00:04:32.396 10:31:39 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:32.396 10:31:39 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.396 10:31:39 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.396 10:31:39 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.396 ************************************ 00:04:32.396 START TEST env_memory 00:04:32.396 ************************************ 00:04:32.396 10:31:39 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:32.396 00:04:32.396 00:04:32.396 CUnit - A unit testing framework for C - Version 2.1-3 00:04:32.396 http://cunit.sourceforge.net/ 00:04:32.396 00:04:32.396 00:04:32.396 Suite: memory 00:04:32.396 Test: alloc and free memory map ...[2024-11-19 10:31:39.668180] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:32.396 passed 00:04:32.396 Test: mem map translation ...[2024-11-19 10:31:39.688404] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:32.396 [2024-11-19 10:31:39.688420] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:32.396 [2024-11-19 10:31:39.688470] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:32.396 [2024-11-19 10:31:39.688476] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:32.396 passed 00:04:32.396 Test: mem map registration ...[2024-11-19 10:31:39.728664] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:32.396 [2024-11-19 10:31:39.728678] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:32.396 passed 00:04:32.396 Test: mem map adjacent registrations ...passed 00:04:32.396 00:04:32.396 Run Summary: Type Total Ran Passed Failed Inactive 00:04:32.396 suites 1 1 n/a 0 0 00:04:32.396 tests 4 4 4 0 0 00:04:32.396 asserts 152 152 152 0 n/a 00:04:32.396 00:04:32.396 Elapsed time = 0.146 seconds 00:04:32.396 00:04:32.396 real 0m0.159s 00:04:32.396 user 0m0.149s 00:04:32.396 sys 0m0.009s 00:04:32.396 10:31:39 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.396 10:31:39 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:32.396 ************************************ 00:04:32.396 END TEST env_memory 00:04:32.396 ************************************ 00:04:32.396 10:31:39 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:32.396 10:31:39 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.396 10:31:39 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.396 10:31:39 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.656 ************************************ 00:04:32.656 START TEST env_vtophys 00:04:32.656 ************************************ 00:04:32.656 10:31:39 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:32.656 EAL: lib.eal log level changed from notice to debug 00:04:32.656 EAL: Detected lcore 0 as core 0 on socket 0 00:04:32.656 EAL: Detected lcore 1 as core 1 on socket 0 00:04:32.656 EAL: Detected lcore 2 as core 2 on socket 0 00:04:32.656 EAL: Detected lcore 3 as core 3 on socket 0 00:04:32.656 EAL: Detected lcore 4 as core 4 on socket 0 00:04:32.656 EAL: Detected lcore 5 as core 5 on socket 0 00:04:32.656 EAL: Detected lcore 6 as core 6 on socket 0 00:04:32.656 EAL: Detected lcore 7 as core 8 on socket 0 00:04:32.656 EAL: Detected lcore 8 as core 9 on socket 0 00:04:32.656 EAL: Detected lcore 9 as core 10 on socket 0 00:04:32.656 EAL: Detected lcore 10 as 
core 11 on socket 0 00:04:32.656 EAL: Detected lcore 11 as core 12 on socket 0 00:04:32.656 EAL: Detected lcore 12 as core 13 on socket 0 00:04:32.656 EAL: Detected lcore 13 as core 16 on socket 0 00:04:32.656 EAL: Detected lcore 14 as core 17 on socket 0 00:04:32.656 EAL: Detected lcore 15 as core 18 on socket 0 00:04:32.656 EAL: Detected lcore 16 as core 19 on socket 0 00:04:32.656 EAL: Detected lcore 17 as core 20 on socket 0 00:04:32.656 EAL: Detected lcore 18 as core 21 on socket 0 00:04:32.656 EAL: Detected lcore 19 as core 25 on socket 0 00:04:32.656 EAL: Detected lcore 20 as core 26 on socket 0 00:04:32.656 EAL: Detected lcore 21 as core 27 on socket 0 00:04:32.656 EAL: Detected lcore 22 as core 28 on socket 0 00:04:32.656 EAL: Detected lcore 23 as core 29 on socket 0 00:04:32.656 EAL: Detected lcore 24 as core 0 on socket 1 00:04:32.656 EAL: Detected lcore 25 as core 1 on socket 1 00:04:32.656 EAL: Detected lcore 26 as core 2 on socket 1 00:04:32.656 EAL: Detected lcore 27 as core 3 on socket 1 00:04:32.656 EAL: Detected lcore 28 as core 4 on socket 1 00:04:32.656 EAL: Detected lcore 29 as core 5 on socket 1 00:04:32.656 EAL: Detected lcore 30 as core 6 on socket 1 00:04:32.656 EAL: Detected lcore 31 as core 9 on socket 1 00:04:32.656 EAL: Detected lcore 32 as core 10 on socket 1 00:04:32.656 EAL: Detected lcore 33 as core 11 on socket 1 00:04:32.656 EAL: Detected lcore 34 as core 12 on socket 1 00:04:32.656 EAL: Detected lcore 35 as core 13 on socket 1 00:04:32.656 EAL: Detected lcore 36 as core 16 on socket 1 00:04:32.656 EAL: Detected lcore 37 as core 17 on socket 1 00:04:32.656 EAL: Detected lcore 38 as core 18 on socket 1 00:04:32.656 EAL: Detected lcore 39 as core 19 on socket 1 00:04:32.656 EAL: Detected lcore 40 as core 20 on socket 1 00:04:32.656 EAL: Detected lcore 41 as core 21 on socket 1 00:04:32.656 EAL: Detected lcore 42 as core 24 on socket 1 00:04:32.656 EAL: Detected lcore 43 as core 25 on socket 1 00:04:32.656 EAL: Detected lcore 44 as core 26 on socket 1 00:04:32.656 EAL: Detected lcore 45 as core 27 on socket 1 00:04:32.656 EAL: Detected lcore 46 as core 28 on socket 1 00:04:32.656 EAL: Detected lcore 47 as core 29 on socket 1 00:04:32.656 EAL: Detected lcore 48 as core 0 on socket 0 00:04:32.656 EAL: Detected lcore 49 as core 1 on socket 0 00:04:32.656 EAL: Detected lcore 50 as core 2 on socket 0 00:04:32.656 EAL: Detected lcore 51 as core 3 on socket 0 00:04:32.656 EAL: Detected lcore 52 as core 4 on socket 0 00:04:32.656 EAL: Detected lcore 53 as core 5 on socket 0 00:04:32.656 EAL: Detected lcore 54 as core 6 on socket 0 00:04:32.656 EAL: Detected lcore 55 as core 8 on socket 0 00:04:32.656 EAL: Detected lcore 56 as core 9 on socket 0 00:04:32.656 EAL: Detected lcore 57 as core 10 on socket 0 00:04:32.656 EAL: Detected lcore 58 as core 11 on socket 0 00:04:32.656 EAL: Detected lcore 59 as core 12 on socket 0 00:04:32.656 EAL: Detected lcore 60 as core 13 on socket 0 00:04:32.656 EAL: Detected lcore 61 as core 16 on socket 0 00:04:32.656 EAL: Detected lcore 62 as core 17 on socket 0 00:04:32.656 EAL: Detected lcore 63 as core 18 on socket 0 00:04:32.656 EAL: Detected lcore 64 as core 19 on socket 0 00:04:32.656 EAL: Detected lcore 65 as core 20 on socket 0 00:04:32.656 EAL: Detected lcore 66 as core 21 on socket 0 00:04:32.656 EAL: Detected lcore 67 as core 25 on socket 0 00:04:32.656 EAL: Detected lcore 68 as core 26 on socket 0 00:04:32.656 EAL: Detected lcore 69 as core 27 on socket 0 00:04:32.656 EAL: Detected lcore 70 as core 28 on socket 0 
00:04:32.656 EAL: Detected lcore 71 as core 29 on socket 0 00:04:32.656 EAL: Detected lcore 72 as core 0 on socket 1 00:04:32.656 EAL: Detected lcore 73 as core 1 on socket 1 00:04:32.656 EAL: Detected lcore 74 as core 2 on socket 1 00:04:32.656 EAL: Detected lcore 75 as core 3 on socket 1 00:04:32.656 EAL: Detected lcore 76 as core 4 on socket 1 00:04:32.656 EAL: Detected lcore 77 as core 5 on socket 1 00:04:32.656 EAL: Detected lcore 78 as core 6 on socket 1 00:04:32.656 EAL: Detected lcore 79 as core 9 on socket 1 00:04:32.656 EAL: Detected lcore 80 as core 10 on socket 1 00:04:32.656 EAL: Detected lcore 81 as core 11 on socket 1 00:04:32.656 EAL: Detected lcore 82 as core 12 on socket 1 00:04:32.656 EAL: Detected lcore 83 as core 13 on socket 1 00:04:32.656 EAL: Detected lcore 84 as core 16 on socket 1 00:04:32.656 EAL: Detected lcore 85 as core 17 on socket 1 00:04:32.656 EAL: Detected lcore 86 as core 18 on socket 1 00:04:32.656 EAL: Detected lcore 87 as core 19 on socket 1 00:04:32.656 EAL: Detected lcore 88 as core 20 on socket 1 00:04:32.656 EAL: Detected lcore 89 as core 21 on socket 1 00:04:32.656 EAL: Detected lcore 90 as core 24 on socket 1 00:04:32.656 EAL: Detected lcore 91 as core 25 on socket 1 00:04:32.656 EAL: Detected lcore 92 as core 26 on socket 1 00:04:32.656 EAL: Detected lcore 93 as core 27 on socket 1 00:04:32.656 EAL: Detected lcore 94 as core 28 on socket 1 00:04:32.656 EAL: Detected lcore 95 as core 29 on socket 1 00:04:32.656 EAL: Maximum logical cores by configuration: 128 00:04:32.656 EAL: Detected CPU lcores: 96 00:04:32.656 EAL: Detected NUMA nodes: 2 00:04:32.656 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:32.656 EAL: Detected shared linkage of DPDK 00:04:32.656 EAL: No shared files mode enabled, IPC will be disabled 00:04:32.656 EAL: Bus pci wants IOVA as 'DC' 00:04:32.656 EAL: Buses did not request a specific IOVA mode. 00:04:32.656 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:32.656 EAL: Selected IOVA mode 'VA' 00:04:32.656 EAL: Probing VFIO support... 00:04:32.656 EAL: IOMMU type 1 (Type 1) is supported 00:04:32.656 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:32.656 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:32.656 EAL: VFIO support initialized 00:04:32.656 EAL: Ask a virtual area of 0x2e000 bytes 00:04:32.656 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:32.656 EAL: Setting up physically contiguous memory... 
00:04:32.656 EAL: Setting maximum number of open files to 524288 00:04:32.656 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:32.656 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:32.656 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:32.656 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.656 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:32.656 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:32.656 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.657 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:32.657 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:32.657 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.657 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:32.657 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:32.657 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.657 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:32.657 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:32.657 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.657 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:32.657 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:32.657 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.657 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:32.657 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:32.657 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.657 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:32.657 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:32.657 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.657 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:32.657 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:32.657 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:32.657 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.657 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:32.657 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:32.657 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.657 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:32.657 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:32.657 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.657 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:32.657 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:32.657 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.657 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:32.657 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:32.657 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.657 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:32.657 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:32.657 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.657 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:32.657 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:32.657 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.657 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:32.657 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:32.657 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.657 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:32.657 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:32.657 EAL: Hugepages will be freed exactly as allocated. 00:04:32.657 EAL: No shared files mode enabled, IPC is disabled 00:04:32.657 EAL: No shared files mode enabled, IPC is disabled 00:04:32.657 EAL: TSC frequency is ~2300000 KHz 00:04:32.657 EAL: Main lcore 0 is ready (tid=7faf159b7a00;cpuset=[0]) 00:04:32.657 EAL: Trying to obtain current memory policy. 00:04:32.657 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.657 EAL: Restoring previous memory policy: 0 00:04:32.657 EAL: request: mp_malloc_sync 00:04:32.657 EAL: No shared files mode enabled, IPC is disabled 00:04:32.657 EAL: Heap on socket 0 was expanded by 2MB 00:04:32.657 EAL: No shared files mode enabled, IPC is disabled 00:04:32.657 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:32.657 EAL: Mem event callback 'spdk:(nil)' registered 00:04:32.657 00:04:32.657 00:04:32.657 CUnit - A unit testing framework for C - Version 2.1-3 00:04:32.657 http://cunit.sourceforge.net/ 00:04:32.657 00:04:32.657 00:04:32.657 Suite: components_suite 00:04:32.657 Test: vtophys_malloc_test ...passed 00:04:32.657 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:32.657 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.657 EAL: Restoring previous memory policy: 4 00:04:32.657 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.657 EAL: request: mp_malloc_sync 00:04:32.657 EAL: No shared files mode enabled, IPC is disabled 00:04:32.657 EAL: Heap on socket 0 was expanded by 4MB 00:04:32.657 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.657 EAL: request: mp_malloc_sync 00:04:32.657 EAL: No shared files mode enabled, IPC is disabled 00:04:32.657 EAL: Heap on socket 0 was shrunk by 4MB 00:04:32.657 EAL: Trying to obtain current memory policy. 00:04:32.657 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.657 EAL: Restoring previous memory policy: 4 00:04:32.657 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.657 EAL: request: mp_malloc_sync 00:04:32.657 EAL: No shared files mode enabled, IPC is disabled 00:04:32.657 EAL: Heap on socket 0 was expanded by 6MB 00:04:32.657 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.657 EAL: request: mp_malloc_sync 00:04:32.657 EAL: No shared files mode enabled, IPC is disabled 00:04:32.657 EAL: Heap on socket 0 was shrunk by 6MB 00:04:32.657 EAL: Trying to obtain current memory policy. 00:04:32.657 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.657 EAL: Restoring previous memory policy: 4 00:04:32.657 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.657 EAL: request: mp_malloc_sync 00:04:32.657 EAL: No shared files mode enabled, IPC is disabled 00:04:32.657 EAL: Heap on socket 0 was expanded by 10MB 00:04:32.657 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.657 EAL: request: mp_malloc_sync 00:04:32.657 EAL: No shared files mode enabled, IPC is disabled 00:04:32.657 EAL: Heap on socket 0 was shrunk by 10MB 00:04:32.657 EAL: Trying to obtain current memory policy. 
00:04:32.657 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.657 EAL: Restoring previous memory policy: 4 00:04:32.657 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.657 EAL: request: mp_malloc_sync 00:04:32.657 EAL: No shared files mode enabled, IPC is disabled 00:04:32.657 EAL: Heap on socket 0 was expanded by 18MB 00:04:32.657 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.657 EAL: request: mp_malloc_sync 00:04:32.657 EAL: No shared files mode enabled, IPC is disabled 00:04:32.657 EAL: Heap on socket 0 was shrunk by 18MB 00:04:32.657 EAL: Trying to obtain current memory policy. 00:04:32.657 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.657 EAL: Restoring previous memory policy: 4 00:04:32.657 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.657 EAL: request: mp_malloc_sync 00:04:32.657 EAL: No shared files mode enabled, IPC is disabled 00:04:32.657 EAL: Heap on socket 0 was expanded by 34MB 00:04:32.657 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.657 EAL: request: mp_malloc_sync 00:04:32.657 EAL: No shared files mode enabled, IPC is disabled 00:04:32.657 EAL: Heap on socket 0 was shrunk by 34MB 00:04:32.657 EAL: Trying to obtain current memory policy. 00:04:32.657 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.657 EAL: Restoring previous memory policy: 4 00:04:32.657 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.657 EAL: request: mp_malloc_sync 00:04:32.657 EAL: No shared files mode enabled, IPC is disabled 00:04:32.657 EAL: Heap on socket 0 was expanded by 66MB 00:04:32.657 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.657 EAL: request: mp_malloc_sync 00:04:32.657 EAL: No shared files mode enabled, IPC is disabled 00:04:32.657 EAL: Heap on socket 0 was shrunk by 66MB 00:04:32.657 EAL: Trying to obtain current memory policy. 00:04:32.657 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.657 EAL: Restoring previous memory policy: 4 00:04:32.657 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.657 EAL: request: mp_malloc_sync 00:04:32.657 EAL: No shared files mode enabled, IPC is disabled 00:04:32.657 EAL: Heap on socket 0 was expanded by 130MB 00:04:32.657 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.657 EAL: request: mp_malloc_sync 00:04:32.657 EAL: No shared files mode enabled, IPC is disabled 00:04:32.657 EAL: Heap on socket 0 was shrunk by 130MB 00:04:32.657 EAL: Trying to obtain current memory policy. 00:04:32.657 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.657 EAL: Restoring previous memory policy: 4 00:04:32.657 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.657 EAL: request: mp_malloc_sync 00:04:32.657 EAL: No shared files mode enabled, IPC is disabled 00:04:32.657 EAL: Heap on socket 0 was expanded by 258MB 00:04:32.915 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.915 EAL: request: mp_malloc_sync 00:04:32.915 EAL: No shared files mode enabled, IPC is disabled 00:04:32.915 EAL: Heap on socket 0 was shrunk by 258MB 00:04:32.915 EAL: Trying to obtain current memory policy. 
00:04:32.915 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.915 EAL: Restoring previous memory policy: 4 00:04:32.915 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.915 EAL: request: mp_malloc_sync 00:04:32.915 EAL: No shared files mode enabled, IPC is disabled 00:04:32.915 EAL: Heap on socket 0 was expanded by 514MB 00:04:32.915 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.173 EAL: request: mp_malloc_sync 00:04:33.173 EAL: No shared files mode enabled, IPC is disabled 00:04:33.173 EAL: Heap on socket 0 was shrunk by 514MB 00:04:33.173 EAL: Trying to obtain current memory policy. 00:04:33.173 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.431 EAL: Restoring previous memory policy: 4 00:04:33.431 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.431 EAL: request: mp_malloc_sync 00:04:33.431 EAL: No shared files mode enabled, IPC is disabled 00:04:33.431 EAL: Heap on socket 0 was expanded by 1026MB 00:04:33.431 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.689 EAL: request: mp_malloc_sync 00:04:33.689 EAL: No shared files mode enabled, IPC is disabled 00:04:33.689 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:33.689 passed 00:04:33.689 00:04:33.689 Run Summary: Type Total Ran Passed Failed Inactive 00:04:33.689 suites 1 1 n/a 0 0 00:04:33.689 tests 2 2 2 0 0 00:04:33.689 asserts 497 497 497 0 n/a 00:04:33.689 00:04:33.689 Elapsed time = 0.988 seconds 00:04:33.689 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.689 EAL: request: mp_malloc_sync 00:04:33.689 EAL: No shared files mode enabled, IPC is disabled 00:04:33.689 EAL: Heap on socket 0 was shrunk by 2MB 00:04:33.689 EAL: No shared files mode enabled, IPC is disabled 00:04:33.689 EAL: No shared files mode enabled, IPC is disabled 00:04:33.689 EAL: No shared files mode enabled, IPC is disabled 00:04:33.689 00:04:33.689 real 0m1.123s 00:04:33.689 user 0m0.649s 00:04:33.689 sys 0m0.443s 00:04:33.689 10:31:40 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.689 10:31:40 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:33.689 ************************************ 00:04:33.689 END TEST env_vtophys 00:04:33.689 ************************************ 00:04:33.689 10:31:41 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:33.689 10:31:41 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.689 10:31:41 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.689 10:31:41 env -- common/autotest_common.sh@10 -- # set +x 00:04:33.689 ************************************ 00:04:33.689 START TEST env_pci 00:04:33.689 ************************************ 00:04:33.689 10:31:41 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:33.689 00:04:33.689 00:04:33.689 CUnit - A unit testing framework for C - Version 2.1-3 00:04:33.689 http://cunit.sourceforge.net/ 00:04:33.689 00:04:33.689 00:04:33.689 Suite: pci 00:04:33.689 Test: pci_hook ...[2024-11-19 10:31:41.060584] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1494118 has claimed it 00:04:33.689 EAL: Cannot find device (10000:00:01.0) 00:04:33.689 EAL: Failed to attach device on primary process 00:04:33.689 passed 00:04:33.689 00:04:33.689 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:33.689 suites 1 1 n/a 0 0 00:04:33.689 tests 1 1 1 0 0 00:04:33.689 asserts 25 25 25 0 n/a 00:04:33.689 00:04:33.689 Elapsed time = 0.026 seconds 00:04:33.689 00:04:33.689 real 0m0.045s 00:04:33.689 user 0m0.013s 00:04:33.689 sys 0m0.032s 00:04:33.690 10:31:41 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.690 10:31:41 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:33.690 ************************************ 00:04:33.690 END TEST env_pci 00:04:33.690 ************************************ 00:04:33.690 10:31:41 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:33.690 10:31:41 env -- env/env.sh@15 -- # uname 00:04:33.690 10:31:41 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:33.690 10:31:41 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:33.690 10:31:41 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:33.690 10:31:41 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:33.690 10:31:41 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.690 10:31:41 env -- common/autotest_common.sh@10 -- # set +x 00:04:33.948 ************************************ 00:04:33.948 START TEST env_dpdk_post_init 00:04:33.948 ************************************ 00:04:33.948 10:31:41 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:33.948 EAL: Detected CPU lcores: 96 00:04:33.948 EAL: Detected NUMA nodes: 2 00:04:33.948 EAL: Detected shared linkage of DPDK 00:04:33.948 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:33.948 EAL: Selected IOVA mode 'VA' 00:04:33.948 EAL: VFIO support initialized 00:04:33.948 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:33.948 EAL: Using IOMMU type 1 (Type 1) 00:04:33.948 EAL: Ignore mapping IO port bar(1) 00:04:33.948 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:33.948 EAL: Ignore mapping IO port bar(1) 00:04:33.948 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:33.948 EAL: Ignore mapping IO port bar(1) 00:04:33.948 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:33.948 EAL: Ignore mapping IO port bar(1) 00:04:33.948 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:33.948 EAL: Ignore mapping IO port bar(1) 00:04:33.948 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:33.948 EAL: Ignore mapping IO port bar(1) 00:04:33.948 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:33.948 EAL: Ignore mapping IO port bar(1) 00:04:33.948 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:33.948 EAL: Ignore mapping IO port bar(1) 00:04:33.948 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:34.885 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:34.885 EAL: Ignore mapping IO port bar(1) 00:04:34.885 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:34.885 EAL: Ignore mapping IO port bar(1) 00:04:34.885 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:34.885 EAL: Ignore mapping IO port bar(1) 00:04:34.885 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:34.885 EAL: Ignore mapping IO port bar(1) 00:04:34.885 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:34.885 EAL: Ignore mapping IO port bar(1) 00:04:34.885 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:34.885 EAL: Ignore mapping IO port bar(1) 00:04:34.885 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:34.885 EAL: Ignore mapping IO port bar(1) 00:04:34.885 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:34.885 EAL: Ignore mapping IO port bar(1) 00:04:34.885 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:38.174 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:38.174 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:38.174 Starting DPDK initialization... 00:04:38.174 Starting SPDK post initialization... 00:04:38.174 SPDK NVMe probe 00:04:38.174 Attaching to 0000:5e:00.0 00:04:38.174 Attached to 0000:5e:00.0 00:04:38.174 Cleaning up... 00:04:38.174 00:04:38.174 real 0m4.353s 00:04:38.174 user 0m2.981s 00:04:38.174 sys 0m0.448s 00:04:38.174 10:31:45 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.174 10:31:45 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:38.174 ************************************ 00:04:38.174 END TEST env_dpdk_post_init 00:04:38.174 ************************************ 00:04:38.174 10:31:45 env -- env/env.sh@26 -- # uname 00:04:38.174 10:31:45 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:38.174 10:31:45 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:38.174 10:31:45 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.174 10:31:45 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.174 10:31:45 env -- common/autotest_common.sh@10 -- # set +x 00:04:38.174 ************************************ 00:04:38.174 START TEST env_mem_callbacks 00:04:38.174 ************************************ 00:04:38.174 10:31:45 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:38.174 EAL: Detected CPU lcores: 96 00:04:38.174 EAL: Detected NUMA nodes: 2 00:04:38.174 EAL: Detected shared linkage of DPDK 00:04:38.174 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:38.433 EAL: Selected IOVA mode 'VA' 00:04:38.433 EAL: VFIO support initialized 00:04:38.433 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:38.433 00:04:38.433 00:04:38.433 CUnit - A unit testing framework for C - Version 2.1-3 00:04:38.433 http://cunit.sourceforge.net/ 00:04:38.433 00:04:38.433 00:04:38.433 Suite: memory 00:04:38.433 Test: test ... 
00:04:38.433 register 0x200000200000 2097152 00:04:38.433 malloc 3145728 00:04:38.433 register 0x200000400000 4194304 00:04:38.433 buf 0x200000500000 len 3145728 PASSED 00:04:38.433 malloc 64 00:04:38.433 buf 0x2000004fff40 len 64 PASSED 00:04:38.433 malloc 4194304 00:04:38.433 register 0x200000800000 6291456 00:04:38.433 buf 0x200000a00000 len 4194304 PASSED 00:04:38.433 free 0x200000500000 3145728 00:04:38.433 free 0x2000004fff40 64 00:04:38.433 unregister 0x200000400000 4194304 PASSED 00:04:38.433 free 0x200000a00000 4194304 00:04:38.433 unregister 0x200000800000 6291456 PASSED 00:04:38.433 malloc 8388608 00:04:38.433 register 0x200000400000 10485760 00:04:38.433 buf 0x200000600000 len 8388608 PASSED 00:04:38.433 free 0x200000600000 8388608 00:04:38.433 unregister 0x200000400000 10485760 PASSED 00:04:38.433 passed 00:04:38.433 00:04:38.433 Run Summary: Type Total Ran Passed Failed Inactive 00:04:38.433 suites 1 1 n/a 0 0 00:04:38.433 tests 1 1 1 0 0 00:04:38.433 asserts 15 15 15 0 n/a 00:04:38.433 00:04:38.433 Elapsed time = 0.006 seconds 00:04:38.433 00:04:38.433 real 0m0.055s 00:04:38.433 user 0m0.017s 00:04:38.433 sys 0m0.038s 00:04:38.433 10:31:45 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.433 10:31:45 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:38.433 ************************************ 00:04:38.433 END TEST env_mem_callbacks 00:04:38.433 ************************************ 00:04:38.433 00:04:38.433 real 0m6.271s 00:04:38.433 user 0m4.041s 00:04:38.433 sys 0m1.308s 00:04:38.433 10:31:45 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.433 10:31:45 env -- common/autotest_common.sh@10 -- # set +x 00:04:38.433 ************************************ 00:04:38.433 END TEST env 00:04:38.433 ************************************ 00:04:38.433 10:31:45 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:38.433 10:31:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.433 10:31:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.433 10:31:45 -- common/autotest_common.sh@10 -- # set +x 00:04:38.433 ************************************ 00:04:38.433 START TEST rpc 00:04:38.433 ************************************ 00:04:38.433 10:31:45 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:38.433 * Looking for test storage... 
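The register/unregister lines in the memory suite above are SPDK's mem-map bookkeeping: each new region is registered so address translation and registered mem maps can see it, and unregistered when freed. A minimal sketch using the public env API — the mapping size and app name are assumptions for illustration, not taken from the test:

    #include <stdio.h>
    #include <sys/mman.h>
    #include "spdk/env.h"

    int
    main(int argc, char **argv)
    {
        struct spdk_env_opts opts;
        size_t len = 2 * 1024 * 1024;   /* one 2 MiB page, illustrative */

        spdk_env_opts_init(&opts);
        opts.name = "mem_reg_demo";     /* illustrative name */
        if (spdk_env_init(&opts) < 0)
            return 1;

        /* Anonymous mapping, then make it known to SPDK's mem maps --
         * this is what emits "register <vaddr> <len>" style events. */
        void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED)
            return 1;
        if (spdk_mem_register(buf, len) != 0)
            return 1;

        printf("registered %p len %zu\n", buf, len);

        spdk_mem_unregister(buf, len);
        munmap(buf, len);
        spdk_env_fini();
        return 0;
    }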
00:04:38.433 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:38.433 10:31:45 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:38.433 10:31:45 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:38.433 10:31:45 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:38.692 10:31:45 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:38.692 10:31:45 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:38.692 10:31:45 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:38.692 10:31:45 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:38.692 10:31:45 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.692 10:31:45 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:38.692 10:31:45 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:38.692 10:31:45 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:38.692 10:31:45 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:38.692 10:31:45 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:38.692 10:31:45 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:38.692 10:31:45 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:38.692 10:31:45 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:38.692 10:31:45 rpc -- scripts/common.sh@345 -- # : 1 00:04:38.692 10:31:45 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:38.692 10:31:45 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:38.692 10:31:45 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:38.692 10:31:45 rpc -- scripts/common.sh@353 -- # local d=1 00:04:38.692 10:31:45 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.692 10:31:45 rpc -- scripts/common.sh@355 -- # echo 1 00:04:38.692 10:31:45 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:38.692 10:31:45 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:38.692 10:31:45 rpc -- scripts/common.sh@353 -- # local d=2 00:04:38.692 10:31:45 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.692 10:31:45 rpc -- scripts/common.sh@355 -- # echo 2 00:04:38.692 10:31:45 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:38.692 10:31:45 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:38.692 10:31:45 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:38.692 10:31:45 rpc -- scripts/common.sh@368 -- # return 0 00:04:38.692 10:31:45 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.692 10:31:45 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:38.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.692 --rc genhtml_branch_coverage=1 00:04:38.692 --rc genhtml_function_coverage=1 00:04:38.692 --rc genhtml_legend=1 00:04:38.692 --rc geninfo_all_blocks=1 00:04:38.692 --rc geninfo_unexecuted_blocks=1 00:04:38.692 00:04:38.692 ' 00:04:38.692 10:31:45 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:38.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.692 --rc genhtml_branch_coverage=1 00:04:38.692 --rc genhtml_function_coverage=1 00:04:38.692 --rc genhtml_legend=1 00:04:38.692 --rc geninfo_all_blocks=1 00:04:38.692 --rc geninfo_unexecuted_blocks=1 00:04:38.692 00:04:38.692 ' 00:04:38.692 10:31:45 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:38.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.692 --rc genhtml_branch_coverage=1 00:04:38.692 --rc genhtml_function_coverage=1 
00:04:38.692 --rc genhtml_legend=1 00:04:38.692 --rc geninfo_all_blocks=1 00:04:38.692 --rc geninfo_unexecuted_blocks=1 00:04:38.692 00:04:38.692 ' 00:04:38.692 10:31:45 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:38.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.692 --rc genhtml_branch_coverage=1 00:04:38.692 --rc genhtml_function_coverage=1 00:04:38.692 --rc genhtml_legend=1 00:04:38.692 --rc geninfo_all_blocks=1 00:04:38.692 --rc geninfo_unexecuted_blocks=1 00:04:38.692 00:04:38.692 ' 00:04:38.692 10:31:45 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1495044 00:04:38.692 10:31:45 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:38.692 10:31:45 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:38.692 10:31:45 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1495044 00:04:38.692 10:31:45 rpc -- common/autotest_common.sh@835 -- # '[' -z 1495044 ']' 00:04:38.692 10:31:45 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.692 10:31:45 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:38.692 10:31:45 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.692 10:31:45 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:38.692 10:31:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.692 [2024-11-19 10:31:45.984914] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:04:38.692 [2024-11-19 10:31:45.984965] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1495044 ] 00:04:38.692 [2024-11-19 10:31:46.056559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.692 [2024-11-19 10:31:46.098665] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:38.692 [2024-11-19 10:31:46.098703] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1495044' to capture a snapshot of events at runtime. 00:04:38.692 [2024-11-19 10:31:46.098710] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:38.692 [2024-11-19 10:31:46.098718] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:38.692 [2024-11-19 10:31:46.098723] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1495044 for offline analysis/debug. 
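The NOTICE block above is the target wiring up tracing for the '-e bdev' flag: the tpoint group mask is recorded in /dev/shm/spdk_tgt_trace.pid<pid> so spdk_trace can attach to the running process or an offline copy. A minimal embedding of the same startup path through the public app framework — the app name is made up, and passing the group by name mirrors how '-e' is used here; this is a sketch, not the target's exact code:

    #include "spdk/event.h"

    static void
    start_cb(void *ctx)
    {
        (void)ctx;
        /* Target is up: the RPC listener on opts.rpc_addr is serving,
         * which is what waitforlisten polls for above. */
        spdk_app_stop(0);    /* stop immediately in this sketch */
    }

    int
    main(int argc, char **argv)
    {
        struct spdk_app_opts opts;
        int rc;

        spdk_app_opts_init(&opts, sizeof(opts));
        opts.name = "demo_tgt";                  /* illustrative */
        opts.rpc_addr = "/var/tmp/spdk.sock";
        opts.tpoint_group_mask = "bdev";         /* same effect as '-e bdev' */

        rc = spdk_app_start(&opts, start_cb, NULL);
        spdk_app_fini();
        return rc;
    }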
00:04:38.692 [2024-11-19 10:31:46.099281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.951 10:31:46 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.951 10:31:46 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:38.951 10:31:46 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:38.951 10:31:46 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:38.951 10:31:46 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:38.951 10:31:46 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:38.951 10:31:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.951 10:31:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.951 10:31:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.951 ************************************ 00:04:38.951 START TEST rpc_integrity 00:04:38.951 ************************************ 00:04:38.951 10:31:46 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:38.951 10:31:46 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:38.951 10:31:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.951 10:31:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.951 10:31:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.951 10:31:46 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:38.951 10:31:46 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:39.211 10:31:46 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:39.211 10:31:46 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:39.211 10:31:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.211 10:31:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.211 10:31:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.211 10:31:46 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:39.211 10:31:46 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:39.211 10:31:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.211 10:31:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.211 10:31:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.211 10:31:46 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:39.211 { 00:04:39.211 "name": "Malloc0", 00:04:39.211 "aliases": [ 00:04:39.211 "26fd1253-8445-4408-afdc-ab894cd4b203" 00:04:39.211 ], 00:04:39.211 "product_name": "Malloc disk", 00:04:39.211 "block_size": 512, 00:04:39.211 "num_blocks": 16384, 00:04:39.211 "uuid": "26fd1253-8445-4408-afdc-ab894cd4b203", 00:04:39.211 "assigned_rate_limits": { 00:04:39.211 "rw_ios_per_sec": 0, 00:04:39.211 "rw_mbytes_per_sec": 0, 00:04:39.211 "r_mbytes_per_sec": 0, 00:04:39.211 "w_mbytes_per_sec": 0 00:04:39.211 }, 
00:04:39.211 "claimed": false, 00:04:39.211 "zoned": false, 00:04:39.211 "supported_io_types": { 00:04:39.211 "read": true, 00:04:39.211 "write": true, 00:04:39.211 "unmap": true, 00:04:39.211 "flush": true, 00:04:39.211 "reset": true, 00:04:39.211 "nvme_admin": false, 00:04:39.211 "nvme_io": false, 00:04:39.211 "nvme_io_md": false, 00:04:39.211 "write_zeroes": true, 00:04:39.211 "zcopy": true, 00:04:39.211 "get_zone_info": false, 00:04:39.211 "zone_management": false, 00:04:39.211 "zone_append": false, 00:04:39.211 "compare": false, 00:04:39.211 "compare_and_write": false, 00:04:39.211 "abort": true, 00:04:39.211 "seek_hole": false, 00:04:39.211 "seek_data": false, 00:04:39.211 "copy": true, 00:04:39.211 "nvme_iov_md": false 00:04:39.211 }, 00:04:39.211 "memory_domains": [ 00:04:39.211 { 00:04:39.211 "dma_device_id": "system", 00:04:39.211 "dma_device_type": 1 00:04:39.211 }, 00:04:39.211 { 00:04:39.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.211 "dma_device_type": 2 00:04:39.211 } 00:04:39.211 ], 00:04:39.211 "driver_specific": {} 00:04:39.211 } 00:04:39.211 ]' 00:04:39.211 10:31:46 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:39.211 10:31:46 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:39.211 10:31:46 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:39.211 10:31:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.211 10:31:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.211 [2024-11-19 10:31:46.480736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:39.211 [2024-11-19 10:31:46.480764] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:39.211 [2024-11-19 10:31:46.480777] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14e86e0 00:04:39.211 [2024-11-19 10:31:46.480783] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:39.211 [2024-11-19 10:31:46.481895] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:39.211 [2024-11-19 10:31:46.481915] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:39.211 Passthru0 00:04:39.211 10:31:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.211 10:31:46 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:39.211 10:31:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.211 10:31:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.211 10:31:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.211 10:31:46 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:39.211 { 00:04:39.211 "name": "Malloc0", 00:04:39.211 "aliases": [ 00:04:39.211 "26fd1253-8445-4408-afdc-ab894cd4b203" 00:04:39.211 ], 00:04:39.211 "product_name": "Malloc disk", 00:04:39.211 "block_size": 512, 00:04:39.211 "num_blocks": 16384, 00:04:39.211 "uuid": "26fd1253-8445-4408-afdc-ab894cd4b203", 00:04:39.211 "assigned_rate_limits": { 00:04:39.211 "rw_ios_per_sec": 0, 00:04:39.211 "rw_mbytes_per_sec": 0, 00:04:39.211 "r_mbytes_per_sec": 0, 00:04:39.211 "w_mbytes_per_sec": 0 00:04:39.211 }, 00:04:39.211 "claimed": true, 00:04:39.211 "claim_type": "exclusive_write", 00:04:39.211 "zoned": false, 00:04:39.211 "supported_io_types": { 00:04:39.211 "read": true, 00:04:39.211 "write": true, 00:04:39.211 "unmap": true, 00:04:39.211 "flush": 
true, 00:04:39.211 "reset": true, 00:04:39.211 "nvme_admin": false, 00:04:39.211 "nvme_io": false, 00:04:39.211 "nvme_io_md": false, 00:04:39.211 "write_zeroes": true, 00:04:39.211 "zcopy": true, 00:04:39.211 "get_zone_info": false, 00:04:39.211 "zone_management": false, 00:04:39.211 "zone_append": false, 00:04:39.211 "compare": false, 00:04:39.211 "compare_and_write": false, 00:04:39.211 "abort": true, 00:04:39.211 "seek_hole": false, 00:04:39.211 "seek_data": false, 00:04:39.211 "copy": true, 00:04:39.211 "nvme_iov_md": false 00:04:39.211 }, 00:04:39.211 "memory_domains": [ 00:04:39.211 { 00:04:39.211 "dma_device_id": "system", 00:04:39.211 "dma_device_type": 1 00:04:39.211 }, 00:04:39.211 { 00:04:39.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.211 "dma_device_type": 2 00:04:39.211 } 00:04:39.211 ], 00:04:39.211 "driver_specific": {} 00:04:39.211 }, 00:04:39.211 { 00:04:39.211 "name": "Passthru0", 00:04:39.211 "aliases": [ 00:04:39.211 "79bc89f4-06da-5601-a710-9b9137598c31" 00:04:39.211 ], 00:04:39.211 "product_name": "passthru", 00:04:39.211 "block_size": 512, 00:04:39.211 "num_blocks": 16384, 00:04:39.211 "uuid": "79bc89f4-06da-5601-a710-9b9137598c31", 00:04:39.211 "assigned_rate_limits": { 00:04:39.211 "rw_ios_per_sec": 0, 00:04:39.211 "rw_mbytes_per_sec": 0, 00:04:39.211 "r_mbytes_per_sec": 0, 00:04:39.211 "w_mbytes_per_sec": 0 00:04:39.211 }, 00:04:39.211 "claimed": false, 00:04:39.211 "zoned": false, 00:04:39.211 "supported_io_types": { 00:04:39.211 "read": true, 00:04:39.211 "write": true, 00:04:39.211 "unmap": true, 00:04:39.211 "flush": true, 00:04:39.211 "reset": true, 00:04:39.211 "nvme_admin": false, 00:04:39.211 "nvme_io": false, 00:04:39.211 "nvme_io_md": false, 00:04:39.211 "write_zeroes": true, 00:04:39.211 "zcopy": true, 00:04:39.211 "get_zone_info": false, 00:04:39.212 "zone_management": false, 00:04:39.212 "zone_append": false, 00:04:39.212 "compare": false, 00:04:39.212 "compare_and_write": false, 00:04:39.212 "abort": true, 00:04:39.212 "seek_hole": false, 00:04:39.212 "seek_data": false, 00:04:39.212 "copy": true, 00:04:39.212 "nvme_iov_md": false 00:04:39.212 }, 00:04:39.212 "memory_domains": [ 00:04:39.212 { 00:04:39.212 "dma_device_id": "system", 00:04:39.212 "dma_device_type": 1 00:04:39.212 }, 00:04:39.212 { 00:04:39.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.212 "dma_device_type": 2 00:04:39.212 } 00:04:39.212 ], 00:04:39.212 "driver_specific": { 00:04:39.212 "passthru": { 00:04:39.212 "name": "Passthru0", 00:04:39.212 "base_bdev_name": "Malloc0" 00:04:39.212 } 00:04:39.212 } 00:04:39.212 } 00:04:39.212 ]' 00:04:39.212 10:31:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:39.212 10:31:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:39.212 10:31:46 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:39.212 10:31:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.212 10:31:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.212 10:31:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.212 10:31:46 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:39.212 10:31:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.212 10:31:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.212 10:31:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.212 10:31:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:04:39.212 10:31:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.212 10:31:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.212 10:31:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.212 10:31:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:39.212 10:31:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:39.212 10:31:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:39.212 00:04:39.212 real 0m0.275s 00:04:39.212 user 0m0.175s 00:04:39.212 sys 0m0.038s 00:04:39.212 10:31:46 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.212 10:31:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.212 ************************************ 00:04:39.212 END TEST rpc_integrity 00:04:39.212 ************************************ 00:04:39.212 10:31:46 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:39.212 10:31:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.212 10:31:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.212 10:31:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.472 ************************************ 00:04:39.472 START TEST rpc_plugins 00:04:39.472 ************************************ 00:04:39.472 10:31:46 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:39.472 10:31:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:39.472 10:31:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.472 10:31:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:39.472 10:31:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.472 10:31:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:39.472 10:31:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:39.472 10:31:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.472 10:31:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:39.472 10:31:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.472 10:31:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:39.472 { 00:04:39.472 "name": "Malloc1", 00:04:39.472 "aliases": [ 00:04:39.472 "660a3d5d-45c0-421f-bfb2-719a718b525e" 00:04:39.472 ], 00:04:39.472 "product_name": "Malloc disk", 00:04:39.472 "block_size": 4096, 00:04:39.472 "num_blocks": 256, 00:04:39.472 "uuid": "660a3d5d-45c0-421f-bfb2-719a718b525e", 00:04:39.472 "assigned_rate_limits": { 00:04:39.472 "rw_ios_per_sec": 0, 00:04:39.472 "rw_mbytes_per_sec": 0, 00:04:39.472 "r_mbytes_per_sec": 0, 00:04:39.472 "w_mbytes_per_sec": 0 00:04:39.472 }, 00:04:39.472 "claimed": false, 00:04:39.472 "zoned": false, 00:04:39.472 "supported_io_types": { 00:04:39.472 "read": true, 00:04:39.472 "write": true, 00:04:39.472 "unmap": true, 00:04:39.472 "flush": true, 00:04:39.472 "reset": true, 00:04:39.472 "nvme_admin": false, 00:04:39.472 "nvme_io": false, 00:04:39.472 "nvme_io_md": false, 00:04:39.472 "write_zeroes": true, 00:04:39.472 "zcopy": true, 00:04:39.472 "get_zone_info": false, 00:04:39.472 "zone_management": false, 00:04:39.472 "zone_append": false, 00:04:39.472 "compare": false, 00:04:39.472 "compare_and_write": false, 00:04:39.472 "abort": true, 00:04:39.472 "seek_hole": false, 00:04:39.472 "seek_data": false, 00:04:39.472 "copy": true, 00:04:39.472 "nvme_iov_md": false 
00:04:39.472 }, 00:04:39.472 "memory_domains": [ 00:04:39.472 { 00:04:39.472 "dma_device_id": "system", 00:04:39.472 "dma_device_type": 1 00:04:39.472 }, 00:04:39.472 { 00:04:39.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.472 "dma_device_type": 2 00:04:39.472 } 00:04:39.472 ], 00:04:39.472 "driver_specific": {} 00:04:39.472 } 00:04:39.472 ]' 00:04:39.472 10:31:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:39.472 10:31:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:39.472 10:31:46 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:39.472 10:31:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.472 10:31:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:39.472 10:31:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.472 10:31:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:39.472 10:31:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.472 10:31:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:39.472 10:31:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.472 10:31:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:39.472 10:31:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:39.472 10:31:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:39.472 00:04:39.472 real 0m0.150s 00:04:39.472 user 0m0.090s 00:04:39.472 sys 0m0.020s 00:04:39.472 10:31:46 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.472 10:31:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:39.472 ************************************ 00:04:39.472 END TEST rpc_plugins 00:04:39.472 ************************************ 00:04:39.472 10:31:46 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:39.472 10:31:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.472 10:31:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.472 10:31:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.472 ************************************ 00:04:39.472 START TEST rpc_trace_cmd_test 00:04:39.472 ************************************ 00:04:39.472 10:31:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:39.472 10:31:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:39.472 10:31:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:39.472 10:31:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.472 10:31:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:39.731 10:31:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.731 10:31:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:39.731 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1495044", 00:04:39.731 "tpoint_group_mask": "0x8", 00:04:39.731 "iscsi_conn": { 00:04:39.731 "mask": "0x2", 00:04:39.731 "tpoint_mask": "0x0" 00:04:39.731 }, 00:04:39.731 "scsi": { 00:04:39.731 "mask": "0x4", 00:04:39.731 "tpoint_mask": "0x0" 00:04:39.731 }, 00:04:39.731 "bdev": { 00:04:39.731 "mask": "0x8", 00:04:39.731 "tpoint_mask": "0xffffffffffffffff" 00:04:39.731 }, 00:04:39.731 "nvmf_rdma": { 00:04:39.731 "mask": "0x10", 00:04:39.731 "tpoint_mask": "0x0" 00:04:39.731 }, 00:04:39.731 "nvmf_tcp": { 00:04:39.731 "mask": "0x20", 00:04:39.731 
"tpoint_mask": "0x0" 00:04:39.731 }, 00:04:39.731 "ftl": { 00:04:39.731 "mask": "0x40", 00:04:39.731 "tpoint_mask": "0x0" 00:04:39.731 }, 00:04:39.731 "blobfs": { 00:04:39.731 "mask": "0x80", 00:04:39.731 "tpoint_mask": "0x0" 00:04:39.731 }, 00:04:39.731 "dsa": { 00:04:39.731 "mask": "0x200", 00:04:39.731 "tpoint_mask": "0x0" 00:04:39.731 }, 00:04:39.731 "thread": { 00:04:39.731 "mask": "0x400", 00:04:39.731 "tpoint_mask": "0x0" 00:04:39.731 }, 00:04:39.731 "nvme_pcie": { 00:04:39.731 "mask": "0x800", 00:04:39.731 "tpoint_mask": "0x0" 00:04:39.731 }, 00:04:39.731 "iaa": { 00:04:39.731 "mask": "0x1000", 00:04:39.731 "tpoint_mask": "0x0" 00:04:39.731 }, 00:04:39.731 "nvme_tcp": { 00:04:39.731 "mask": "0x2000", 00:04:39.731 "tpoint_mask": "0x0" 00:04:39.731 }, 00:04:39.731 "bdev_nvme": { 00:04:39.731 "mask": "0x4000", 00:04:39.731 "tpoint_mask": "0x0" 00:04:39.731 }, 00:04:39.731 "sock": { 00:04:39.731 "mask": "0x8000", 00:04:39.731 "tpoint_mask": "0x0" 00:04:39.731 }, 00:04:39.731 "blob": { 00:04:39.731 "mask": "0x10000", 00:04:39.731 "tpoint_mask": "0x0" 00:04:39.731 }, 00:04:39.731 "bdev_raid": { 00:04:39.731 "mask": "0x20000", 00:04:39.731 "tpoint_mask": "0x0" 00:04:39.731 }, 00:04:39.731 "scheduler": { 00:04:39.731 "mask": "0x40000", 00:04:39.731 "tpoint_mask": "0x0" 00:04:39.731 } 00:04:39.731 }' 00:04:39.731 10:31:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:39.731 10:31:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:39.731 10:31:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:39.731 10:31:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:39.731 10:31:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:39.731 10:31:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:39.732 10:31:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:39.732 10:31:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:39.732 10:31:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:39.732 10:31:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:39.732 00:04:39.732 real 0m0.214s 00:04:39.732 user 0m0.179s 00:04:39.732 sys 0m0.025s 00:04:39.732 10:31:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.732 10:31:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:39.732 ************************************ 00:04:39.732 END TEST rpc_trace_cmd_test 00:04:39.732 ************************************ 00:04:39.732 10:31:47 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:39.732 10:31:47 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:39.732 10:31:47 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:39.732 10:31:47 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.732 10:31:47 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.732 10:31:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.992 ************************************ 00:04:39.992 START TEST rpc_daemon_integrity 00:04:39.992 ************************************ 00:04:39.992 10:31:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:39.992 10:31:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:39.992 10:31:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.992 10:31:47 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.992 10:31:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.992 10:31:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:39.992 10:31:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:39.992 10:31:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:39.992 10:31:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:39.992 10:31:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.992 10:31:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.992 10:31:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.992 10:31:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:39.992 10:31:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:39.992 10:31:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.992 10:31:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.992 10:31:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.992 10:31:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:39.992 { 00:04:39.992 "name": "Malloc2", 00:04:39.992 "aliases": [ 00:04:39.992 "28c6c8e2-311b-4192-bd82-93fae21430cd" 00:04:39.992 ], 00:04:39.992 "product_name": "Malloc disk", 00:04:39.992 "block_size": 512, 00:04:39.992 "num_blocks": 16384, 00:04:39.992 "uuid": "28c6c8e2-311b-4192-bd82-93fae21430cd", 00:04:39.992 "assigned_rate_limits": { 00:04:39.992 "rw_ios_per_sec": 0, 00:04:39.992 "rw_mbytes_per_sec": 0, 00:04:39.992 "r_mbytes_per_sec": 0, 00:04:39.992 "w_mbytes_per_sec": 0 00:04:39.992 }, 00:04:39.992 "claimed": false, 00:04:39.992 "zoned": false, 00:04:39.992 "supported_io_types": { 00:04:39.992 "read": true, 00:04:39.992 "write": true, 00:04:39.992 "unmap": true, 00:04:39.992 "flush": true, 00:04:39.992 "reset": true, 00:04:39.992 "nvme_admin": false, 00:04:39.992 "nvme_io": false, 00:04:39.992 "nvme_io_md": false, 00:04:39.992 "write_zeroes": true, 00:04:39.992 "zcopy": true, 00:04:39.992 "get_zone_info": false, 00:04:39.992 "zone_management": false, 00:04:39.992 "zone_append": false, 00:04:39.992 "compare": false, 00:04:39.992 "compare_and_write": false, 00:04:39.992 "abort": true, 00:04:39.992 "seek_hole": false, 00:04:39.992 "seek_data": false, 00:04:39.992 "copy": true, 00:04:39.992 "nvme_iov_md": false 00:04:39.992 }, 00:04:39.992 "memory_domains": [ 00:04:39.992 { 00:04:39.992 "dma_device_id": "system", 00:04:39.992 "dma_device_type": 1 00:04:39.992 }, 00:04:39.992 { 00:04:39.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.992 "dma_device_type": 2 00:04:39.992 } 00:04:39.992 ], 00:04:39.992 "driver_specific": {} 00:04:39.992 } 00:04:39.992 ]' 00:04:39.992 10:31:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:39.992 10:31:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:39.992 10:31:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:39.992 10:31:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.992 10:31:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.992 [2024-11-19 10:31:47.323056] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:39.992 
[2024-11-19 10:31:47.323083] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:39.992 [2024-11-19 10:31:47.323095] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1578b70 00:04:39.992 [2024-11-19 10:31:47.323101] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:39.992 [2024-11-19 10:31:47.324104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:39.992 [2024-11-19 10:31:47.324124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:39.992 Passthru0 00:04:39.992 10:31:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.992 10:31:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:39.992 10:31:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.992 10:31:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.992 10:31:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.992 10:31:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:39.992 { 00:04:39.992 "name": "Malloc2", 00:04:39.992 "aliases": [ 00:04:39.992 "28c6c8e2-311b-4192-bd82-93fae21430cd" 00:04:39.992 ], 00:04:39.992 "product_name": "Malloc disk", 00:04:39.992 "block_size": 512, 00:04:39.992 "num_blocks": 16384, 00:04:39.992 "uuid": "28c6c8e2-311b-4192-bd82-93fae21430cd", 00:04:39.992 "assigned_rate_limits": { 00:04:39.992 "rw_ios_per_sec": 0, 00:04:39.992 "rw_mbytes_per_sec": 0, 00:04:39.992 "r_mbytes_per_sec": 0, 00:04:39.992 "w_mbytes_per_sec": 0 00:04:39.992 }, 00:04:39.992 "claimed": true, 00:04:39.992 "claim_type": "exclusive_write", 00:04:39.992 "zoned": false, 00:04:39.992 "supported_io_types": { 00:04:39.992 "read": true, 00:04:39.992 "write": true, 00:04:39.992 "unmap": true, 00:04:39.992 "flush": true, 00:04:39.992 "reset": true, 00:04:39.992 "nvme_admin": false, 00:04:39.992 "nvme_io": false, 00:04:39.992 "nvme_io_md": false, 00:04:39.992 "write_zeroes": true, 00:04:39.992 "zcopy": true, 00:04:39.992 "get_zone_info": false, 00:04:39.992 "zone_management": false, 00:04:39.992 "zone_append": false, 00:04:39.992 "compare": false, 00:04:39.992 "compare_and_write": false, 00:04:39.992 "abort": true, 00:04:39.992 "seek_hole": false, 00:04:39.992 "seek_data": false, 00:04:39.992 "copy": true, 00:04:39.992 "nvme_iov_md": false 00:04:39.992 }, 00:04:39.992 "memory_domains": [ 00:04:39.992 { 00:04:39.992 "dma_device_id": "system", 00:04:39.992 "dma_device_type": 1 00:04:39.992 }, 00:04:39.992 { 00:04:39.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.992 "dma_device_type": 2 00:04:39.992 } 00:04:39.992 ], 00:04:39.992 "driver_specific": {} 00:04:39.992 }, 00:04:39.992 { 00:04:39.992 "name": "Passthru0", 00:04:39.992 "aliases": [ 00:04:39.992 "df1654b5-5631-5267-9280-bec2cd55b872" 00:04:39.992 ], 00:04:39.992 "product_name": "passthru", 00:04:39.992 "block_size": 512, 00:04:39.992 "num_blocks": 16384, 00:04:39.992 "uuid": "df1654b5-5631-5267-9280-bec2cd55b872", 00:04:39.992 "assigned_rate_limits": { 00:04:39.992 "rw_ios_per_sec": 0, 00:04:39.992 "rw_mbytes_per_sec": 0, 00:04:39.992 "r_mbytes_per_sec": 0, 00:04:39.992 "w_mbytes_per_sec": 0 00:04:39.992 }, 00:04:39.992 "claimed": false, 00:04:39.992 "zoned": false, 00:04:39.992 "supported_io_types": { 00:04:39.992 "read": true, 00:04:39.992 "write": true, 00:04:39.992 "unmap": true, 00:04:39.992 "flush": true, 00:04:39.992 "reset": true, 
00:04:39.992 "nvme_admin": false, 00:04:39.992 "nvme_io": false, 00:04:39.992 "nvme_io_md": false, 00:04:39.992 "write_zeroes": true, 00:04:39.992 "zcopy": true, 00:04:39.992 "get_zone_info": false, 00:04:39.992 "zone_management": false, 00:04:39.992 "zone_append": false, 00:04:39.992 "compare": false, 00:04:39.992 "compare_and_write": false, 00:04:39.992 "abort": true, 00:04:39.992 "seek_hole": false, 00:04:39.992 "seek_data": false, 00:04:39.992 "copy": true, 00:04:39.992 "nvme_iov_md": false 00:04:39.992 }, 00:04:39.992 "memory_domains": [ 00:04:39.992 { 00:04:39.992 "dma_device_id": "system", 00:04:39.992 "dma_device_type": 1 00:04:39.992 }, 00:04:39.992 { 00:04:39.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.992 "dma_device_type": 2 00:04:39.992 } 00:04:39.992 ], 00:04:39.992 "driver_specific": { 00:04:39.992 "passthru": { 00:04:39.992 "name": "Passthru0", 00:04:39.992 "base_bdev_name": "Malloc2" 00:04:39.992 } 00:04:39.992 } 00:04:39.992 } 00:04:39.992 ]' 00:04:39.992 10:31:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:39.992 10:31:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:39.992 10:31:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:39.992 10:31:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.992 10:31:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.992 10:31:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.993 10:31:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:39.993 10:31:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.993 10:31:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.993 10:31:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.993 10:31:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:39.993 10:31:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.993 10:31:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.993 10:31:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.993 10:31:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:39.993 10:31:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:40.252 10:31:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:40.252 00:04:40.252 real 0m0.277s 00:04:40.252 user 0m0.178s 00:04:40.252 sys 0m0.035s 00:04:40.252 10:31:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.252 10:31:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.252 ************************************ 00:04:40.252 END TEST rpc_daemon_integrity 00:04:40.252 ************************************ 00:04:40.252 10:31:47 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:40.252 10:31:47 rpc -- rpc/rpc.sh@84 -- # killprocess 1495044 00:04:40.252 10:31:47 rpc -- common/autotest_common.sh@954 -- # '[' -z 1495044 ']' 00:04:40.252 10:31:47 rpc -- common/autotest_common.sh@958 -- # kill -0 1495044 00:04:40.252 10:31:47 rpc -- common/autotest_common.sh@959 -- # uname 00:04:40.252 10:31:47 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:40.252 10:31:47 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1495044 
00:04:40.253 10:31:47 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:40.253 10:31:47 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:40.253 10:31:47 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1495044' 00:04:40.253 killing process with pid 1495044 00:04:40.253 10:31:47 rpc -- common/autotest_common.sh@973 -- # kill 1495044 00:04:40.253 10:31:47 rpc -- common/autotest_common.sh@978 -- # wait 1495044 00:04:40.512 00:04:40.512 real 0m2.103s 00:04:40.512 user 0m2.688s 00:04:40.512 sys 0m0.699s 00:04:40.512 10:31:47 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.512 10:31:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.512 ************************************ 00:04:40.512 END TEST rpc 00:04:40.512 ************************************ 00:04:40.512 10:31:47 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:40.512 10:31:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.512 10:31:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.512 10:31:47 -- common/autotest_common.sh@10 -- # set +x 00:04:40.512 ************************************ 00:04:40.512 START TEST skip_rpc 00:04:40.512 ************************************ 00:04:40.512 10:31:47 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:40.772 * Looking for test storage... 00:04:40.772 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:40.772 10:31:48 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:40.772 10:31:48 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:40.772 10:31:48 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:40.772 10:31:48 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:40.772 10:31:48 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.772 10:31:48 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.772 10:31:48 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.772 10:31:48 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.772 10:31:48 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.772 10:31:48 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.772 10:31:48 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.772 10:31:48 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.772 10:31:48 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.772 10:31:48 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.772 10:31:48 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.772 10:31:48 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:40.772 10:31:48 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:40.772 10:31:48 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.772 10:31:48 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:40.772 10:31:48 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:40.772 10:31:48 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:40.772 10:31:48 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.772 10:31:48 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:40.772 10:31:48 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.772 10:31:48 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:40.772 10:31:48 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:40.772 10:31:48 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.772 10:31:48 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:40.772 10:31:48 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.772 10:31:48 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.772 10:31:48 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.772 10:31:48 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:40.772 10:31:48 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.772 10:31:48 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:40.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.772 --rc genhtml_branch_coverage=1 00:04:40.772 --rc genhtml_function_coverage=1 00:04:40.772 --rc genhtml_legend=1 00:04:40.772 --rc geninfo_all_blocks=1 00:04:40.772 --rc geninfo_unexecuted_blocks=1 00:04:40.772 00:04:40.772 ' 00:04:40.772 10:31:48 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:40.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.772 --rc genhtml_branch_coverage=1 00:04:40.772 --rc genhtml_function_coverage=1 00:04:40.772 --rc genhtml_legend=1 00:04:40.772 --rc geninfo_all_blocks=1 00:04:40.772 --rc geninfo_unexecuted_blocks=1 00:04:40.772 00:04:40.772 ' 00:04:40.772 10:31:48 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:40.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.772 --rc genhtml_branch_coverage=1 00:04:40.772 --rc genhtml_function_coverage=1 00:04:40.772 --rc genhtml_legend=1 00:04:40.772 --rc geninfo_all_blocks=1 00:04:40.772 --rc geninfo_unexecuted_blocks=1 00:04:40.772 00:04:40.772 ' 00:04:40.772 10:31:48 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:40.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.772 --rc genhtml_branch_coverage=1 00:04:40.772 --rc genhtml_function_coverage=1 00:04:40.772 --rc genhtml_legend=1 00:04:40.772 --rc geninfo_all_blocks=1 00:04:40.772 --rc geninfo_unexecuted_blocks=1 00:04:40.772 00:04:40.772 ' 00:04:40.772 10:31:48 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:40.772 10:31:48 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:40.772 10:31:48 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:40.772 10:31:48 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.772 10:31:48 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.772 10:31:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.772 ************************************ 00:04:40.772 START TEST skip_rpc 00:04:40.772 ************************************ 00:04:40.772 10:31:48 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:40.772 
10:31:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1495679 00:04:40.772 10:31:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:40.772 10:31:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:40.772 10:31:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:40.772 [2024-11-19 10:31:48.202389] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:04:40.773 [2024-11-19 10:31:48.202429] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1495679 ] 00:04:41.032 [2024-11-19 10:31:48.278446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.032 [2024-11-19 10:31:48.318753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.306 10:31:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:46.306 10:31:53 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:46.306 10:31:53 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:46.306 10:31:53 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:46.306 10:31:53 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:46.306 10:31:53 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:46.306 10:31:53 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:46.306 10:31:53 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:46.306 10:31:53 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.306 10:31:53 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.306 10:31:53 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:46.306 10:31:53 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:46.306 10:31:53 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:46.306 10:31:53 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:46.306 10:31:53 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:46.306 10:31:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:46.306 10:31:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1495679 00:04:46.306 10:31:53 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1495679 ']' 00:04:46.306 10:31:53 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1495679 00:04:46.306 10:31:53 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:46.306 10:31:53 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:46.307 10:31:53 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1495679 00:04:46.307 10:31:53 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:46.307 10:31:53 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:46.307 10:31:53 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1495679' 00:04:46.307 killing process with pid 1495679 00:04:46.307 10:31:53 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1495679 00:04:46.307 10:31:53 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1495679 00:04:46.307 00:04:46.307 real 0m5.371s 00:04:46.307 user 0m5.126s 00:04:46.307 sys 0m0.287s 00:04:46.307 10:31:53 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.307 10:31:53 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.307 ************************************ 00:04:46.307 END TEST skip_rpc 00:04:46.307 ************************************ 00:04:46.307 10:31:53 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:46.307 10:31:53 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.307 10:31:53 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.307 10:31:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.307 ************************************ 00:04:46.307 START TEST skip_rpc_with_json 00:04:46.307 ************************************ 00:04:46.307 10:31:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:46.307 10:31:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:46.307 10:31:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1496623 00:04:46.307 10:31:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:46.307 10:31:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:46.307 10:31:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1496623 00:04:46.307 10:31:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1496623 ']' 00:04:46.307 10:31:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.307 10:31:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:46.307 10:31:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.307 10:31:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:46.307 10:31:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:46.307 [2024-11-19 10:31:53.643976] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:04:46.307 [2024-11-19 10:31:53.644021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1496623 ] 00:04:46.307 [2024-11-19 10:31:53.716769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.566 [2024-11-19 10:31:53.756169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.566 10:31:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:46.566 10:31:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:46.566 10:31:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:46.566 10:31:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.566 10:31:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:46.566 [2024-11-19 10:31:53.983437] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:46.566 request: 00:04:46.566 { 00:04:46.566 "trtype": "tcp", 00:04:46.566 "method": "nvmf_get_transports", 00:04:46.566 "req_id": 1 00:04:46.566 } 00:04:46.566 Got JSON-RPC error response 00:04:46.566 response: 00:04:46.566 { 00:04:46.566 "code": -19, 00:04:46.566 "message": "No such device" 00:04:46.566 } 00:04:46.566 10:31:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:46.566 10:31:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:46.566 10:31:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.566 10:31:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:46.566 [2024-11-19 10:31:53.995531] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:46.566 10:31:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.566 10:31:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:46.566 10:31:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.566 10:31:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:46.825 10:31:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.825 10:31:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:46.825 { 00:04:46.825 "subsystems": [ 00:04:46.825 { 00:04:46.825 "subsystem": "fsdev", 00:04:46.825 "config": [ 00:04:46.825 { 00:04:46.825 "method": "fsdev_set_opts", 00:04:46.826 "params": { 00:04:46.826 "fsdev_io_pool_size": 65535, 00:04:46.826 "fsdev_io_cache_size": 256 00:04:46.826 } 00:04:46.826 } 00:04:46.826 ] 00:04:46.826 }, 00:04:46.826 { 00:04:46.826 "subsystem": "vfio_user_target", 00:04:46.826 "config": null 00:04:46.826 }, 00:04:46.826 { 00:04:46.826 "subsystem": "keyring", 00:04:46.826 "config": [] 00:04:46.826 }, 00:04:46.826 { 00:04:46.826 "subsystem": "iobuf", 00:04:46.826 "config": [ 00:04:46.826 { 00:04:46.826 "method": "iobuf_set_options", 00:04:46.826 "params": { 00:04:46.826 "small_pool_count": 8192, 00:04:46.826 "large_pool_count": 1024, 00:04:46.826 "small_bufsize": 8192, 00:04:46.826 "large_bufsize": 135168, 00:04:46.826 "enable_numa": false 00:04:46.826 } 00:04:46.826 } 
00:04:46.826 ] 00:04:46.826 }, 00:04:46.826 { 00:04:46.826 "subsystem": "sock", 00:04:46.826 "config": [ 00:04:46.826 { 00:04:46.826 "method": "sock_set_default_impl", 00:04:46.826 "params": { 00:04:46.826 "impl_name": "posix" 00:04:46.826 } 00:04:46.826 }, 00:04:46.826 { 00:04:46.826 "method": "sock_impl_set_options", 00:04:46.826 "params": { 00:04:46.826 "impl_name": "ssl", 00:04:46.826 "recv_buf_size": 4096, 00:04:46.826 "send_buf_size": 4096, 00:04:46.826 "enable_recv_pipe": true, 00:04:46.826 "enable_quickack": false, 00:04:46.826 "enable_placement_id": 0, 00:04:46.826 "enable_zerocopy_send_server": true, 00:04:46.826 "enable_zerocopy_send_client": false, 00:04:46.826 "zerocopy_threshold": 0, 00:04:46.826 "tls_version": 0, 00:04:46.826 "enable_ktls": false 00:04:46.826 } 00:04:46.826 }, 00:04:46.826 { 00:04:46.826 "method": "sock_impl_set_options", 00:04:46.826 "params": { 00:04:46.826 "impl_name": "posix", 00:04:46.826 "recv_buf_size": 2097152, 00:04:46.826 "send_buf_size": 2097152, 00:04:46.826 "enable_recv_pipe": true, 00:04:46.826 "enable_quickack": false, 00:04:46.826 "enable_placement_id": 0, 00:04:46.826 "enable_zerocopy_send_server": true, 00:04:46.826 "enable_zerocopy_send_client": false, 00:04:46.826 "zerocopy_threshold": 0, 00:04:46.826 "tls_version": 0, 00:04:46.826 "enable_ktls": false 00:04:46.826 } 00:04:46.826 } 00:04:46.826 ] 00:04:46.826 }, 00:04:46.826 { 00:04:46.826 "subsystem": "vmd", 00:04:46.826 "config": [] 00:04:46.826 }, 00:04:46.826 { 00:04:46.826 "subsystem": "accel", 00:04:46.826 "config": [ 00:04:46.826 { 00:04:46.826 "method": "accel_set_options", 00:04:46.826 "params": { 00:04:46.826 "small_cache_size": 128, 00:04:46.826 "large_cache_size": 16, 00:04:46.826 "task_count": 2048, 00:04:46.826 "sequence_count": 2048, 00:04:46.826 "buf_count": 2048 00:04:46.826 } 00:04:46.826 } 00:04:46.826 ] 00:04:46.826 }, 00:04:46.826 { 00:04:46.826 "subsystem": "bdev", 00:04:46.826 "config": [ 00:04:46.826 { 00:04:46.826 "method": "bdev_set_options", 00:04:46.826 "params": { 00:04:46.826 "bdev_io_pool_size": 65535, 00:04:46.826 "bdev_io_cache_size": 256, 00:04:46.826 "bdev_auto_examine": true, 00:04:46.826 "iobuf_small_cache_size": 128, 00:04:46.826 "iobuf_large_cache_size": 16 00:04:46.826 } 00:04:46.826 }, 00:04:46.826 { 00:04:46.826 "method": "bdev_raid_set_options", 00:04:46.826 "params": { 00:04:46.826 "process_window_size_kb": 1024, 00:04:46.826 "process_max_bandwidth_mb_sec": 0 00:04:46.826 } 00:04:46.826 }, 00:04:46.826 { 00:04:46.826 "method": "bdev_iscsi_set_options", 00:04:46.826 "params": { 00:04:46.826 "timeout_sec": 30 00:04:46.826 } 00:04:46.826 }, 00:04:46.826 { 00:04:46.826 "method": "bdev_nvme_set_options", 00:04:46.826 "params": { 00:04:46.826 "action_on_timeout": "none", 00:04:46.826 "timeout_us": 0, 00:04:46.826 "timeout_admin_us": 0, 00:04:46.826 "keep_alive_timeout_ms": 10000, 00:04:46.826 "arbitration_burst": 0, 00:04:46.826 "low_priority_weight": 0, 00:04:46.826 "medium_priority_weight": 0, 00:04:46.826 "high_priority_weight": 0, 00:04:46.826 "nvme_adminq_poll_period_us": 10000, 00:04:46.826 "nvme_ioq_poll_period_us": 0, 00:04:46.826 "io_queue_requests": 0, 00:04:46.826 "delay_cmd_submit": true, 00:04:46.826 "transport_retry_count": 4, 00:04:46.826 "bdev_retry_count": 3, 00:04:46.826 "transport_ack_timeout": 0, 00:04:46.826 "ctrlr_loss_timeout_sec": 0, 00:04:46.826 "reconnect_delay_sec": 0, 00:04:46.826 "fast_io_fail_timeout_sec": 0, 00:04:46.826 "disable_auto_failback": false, 00:04:46.826 "generate_uuids": false, 00:04:46.826 "transport_tos": 
0, 00:04:46.826 "nvme_error_stat": false, 00:04:46.826 "rdma_srq_size": 0, 00:04:46.826 "io_path_stat": false, 00:04:46.826 "allow_accel_sequence": false, 00:04:46.826 "rdma_max_cq_size": 0, 00:04:46.826 "rdma_cm_event_timeout_ms": 0, 00:04:46.826 "dhchap_digests": [ 00:04:46.826 "sha256", 00:04:46.826 "sha384", 00:04:46.826 "sha512" 00:04:46.826 ], 00:04:46.826 "dhchap_dhgroups": [ 00:04:46.826 "null", 00:04:46.826 "ffdhe2048", 00:04:46.826 "ffdhe3072", 00:04:46.826 "ffdhe4096", 00:04:46.826 "ffdhe6144", 00:04:46.826 "ffdhe8192" 00:04:46.826 ] 00:04:46.826 } 00:04:46.826 }, 00:04:46.826 { 00:04:46.826 "method": "bdev_nvme_set_hotplug", 00:04:46.826 "params": { 00:04:46.826 "period_us": 100000, 00:04:46.826 "enable": false 00:04:46.826 } 00:04:46.826 }, 00:04:46.826 { 00:04:46.826 "method": "bdev_wait_for_examine" 00:04:46.826 } 00:04:46.826 ] 00:04:46.826 }, 00:04:46.826 { 00:04:46.826 "subsystem": "scsi", 00:04:46.826 "config": null 00:04:46.826 }, 00:04:46.826 { 00:04:46.826 "subsystem": "scheduler", 00:04:46.826 "config": [ 00:04:46.826 { 00:04:46.826 "method": "framework_set_scheduler", 00:04:46.826 "params": { 00:04:46.826 "name": "static" 00:04:46.826 } 00:04:46.826 } 00:04:46.826 ] 00:04:46.826 }, 00:04:46.826 { 00:04:46.826 "subsystem": "vhost_scsi", 00:04:46.826 "config": [] 00:04:46.826 }, 00:04:46.826 { 00:04:46.826 "subsystem": "vhost_blk", 00:04:46.826 "config": [] 00:04:46.826 }, 00:04:46.826 { 00:04:46.826 "subsystem": "ublk", 00:04:46.826 "config": [] 00:04:46.826 }, 00:04:46.826 { 00:04:46.826 "subsystem": "nbd", 00:04:46.826 "config": [] 00:04:46.826 }, 00:04:46.826 { 00:04:46.826 "subsystem": "nvmf", 00:04:46.826 "config": [ 00:04:46.826 { 00:04:46.826 "method": "nvmf_set_config", 00:04:46.826 "params": { 00:04:46.826 "discovery_filter": "match_any", 00:04:46.826 "admin_cmd_passthru": { 00:04:46.826 "identify_ctrlr": false 00:04:46.826 }, 00:04:46.826 "dhchap_digests": [ 00:04:46.826 "sha256", 00:04:46.826 "sha384", 00:04:46.826 "sha512" 00:04:46.826 ], 00:04:46.826 "dhchap_dhgroups": [ 00:04:46.826 "null", 00:04:46.826 "ffdhe2048", 00:04:46.826 "ffdhe3072", 00:04:46.826 "ffdhe4096", 00:04:46.826 "ffdhe6144", 00:04:46.826 "ffdhe8192" 00:04:46.826 ] 00:04:46.826 } 00:04:46.826 }, 00:04:46.826 { 00:04:46.826 "method": "nvmf_set_max_subsystems", 00:04:46.826 "params": { 00:04:46.826 "max_subsystems": 1024 00:04:46.826 } 00:04:46.826 }, 00:04:46.826 { 00:04:46.826 "method": "nvmf_set_crdt", 00:04:46.826 "params": { 00:04:46.826 "crdt1": 0, 00:04:46.826 "crdt2": 0, 00:04:46.826 "crdt3": 0 00:04:46.826 } 00:04:46.826 }, 00:04:46.826 { 00:04:46.826 "method": "nvmf_create_transport", 00:04:46.826 "params": { 00:04:46.826 "trtype": "TCP", 00:04:46.826 "max_queue_depth": 128, 00:04:46.826 "max_io_qpairs_per_ctrlr": 127, 00:04:46.826 "in_capsule_data_size": 4096, 00:04:46.826 "max_io_size": 131072, 00:04:46.826 "io_unit_size": 131072, 00:04:46.826 "max_aq_depth": 128, 00:04:46.826 "num_shared_buffers": 511, 00:04:46.826 "buf_cache_size": 4294967295, 00:04:46.826 "dif_insert_or_strip": false, 00:04:46.826 "zcopy": false, 00:04:46.826 "c2h_success": true, 00:04:46.826 "sock_priority": 0, 00:04:46.826 "abort_timeout_sec": 1, 00:04:46.826 "ack_timeout": 0, 00:04:46.826 "data_wr_pool_size": 0 00:04:46.826 } 00:04:46.826 } 00:04:46.826 ] 00:04:46.826 }, 00:04:46.826 { 00:04:46.826 "subsystem": "iscsi", 00:04:46.826 "config": [ 00:04:46.826 { 00:04:46.826 "method": "iscsi_set_options", 00:04:46.826 "params": { 00:04:46.826 "node_base": "iqn.2016-06.io.spdk", 00:04:46.826 "max_sessions": 
128, 00:04:46.826 "max_connections_per_session": 2, 00:04:46.826 "max_queue_depth": 64, 00:04:46.826 "default_time2wait": 2, 00:04:46.826 "default_time2retain": 20, 00:04:46.826 "first_burst_length": 8192, 00:04:46.826 "immediate_data": true, 00:04:46.826 "allow_duplicated_isid": false, 00:04:46.826 "error_recovery_level": 0, 00:04:46.826 "nop_timeout": 60, 00:04:46.826 "nop_in_interval": 30, 00:04:46.826 "disable_chap": false, 00:04:46.827 "require_chap": false, 00:04:46.827 "mutual_chap": false, 00:04:46.827 "chap_group": 0, 00:04:46.827 "max_large_datain_per_connection": 64, 00:04:46.827 "max_r2t_per_connection": 4, 00:04:46.827 "pdu_pool_size": 36864, 00:04:46.827 "immediate_data_pool_size": 16384, 00:04:46.827 "data_out_pool_size": 2048 00:04:46.827 } 00:04:46.827 } 00:04:46.827 ] 00:04:46.827 } 00:04:46.827 ] 00:04:46.827 } 00:04:46.827 10:31:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:46.827 10:31:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1496623 00:04:46.827 10:31:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1496623 ']' 00:04:46.827 10:31:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1496623 00:04:46.827 10:31:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:46.827 10:31:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:46.827 10:31:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1496623 00:04:46.827 10:31:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:46.827 10:31:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:46.827 10:31:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1496623' 00:04:46.827 killing process with pid 1496623 00:04:46.827 10:31:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1496623 00:04:46.827 10:31:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1496623 00:04:47.086 10:31:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1496647 00:04:47.086 10:31:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:47.086 10:31:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:52.360 10:31:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1496647 00:04:52.360 10:31:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1496647 ']' 00:04:52.360 10:31:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1496647 00:04:52.360 10:31:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:52.360 10:31:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:52.360 10:31:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1496647 00:04:52.360 10:31:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:52.360 10:31:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:52.360 10:31:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 1496647' 00:04:52.360 killing process with pid 1496647 00:04:52.360 10:31:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1496647 00:04:52.360 10:31:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1496647 00:04:52.619 10:31:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:52.619 10:31:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:52.619 00:04:52.619 real 0m6.294s 00:04:52.619 user 0m5.987s 00:04:52.619 sys 0m0.606s 00:04:52.619 10:31:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.619 10:31:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:52.619 ************************************ 00:04:52.619 END TEST skip_rpc_with_json 00:04:52.619 ************************************ 00:04:52.619 10:31:59 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:52.619 10:31:59 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.619 10:31:59 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.619 10:31:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.619 ************************************ 00:04:52.619 START TEST skip_rpc_with_delay 00:04:52.619 ************************************ 00:04:52.619 10:31:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:52.619 10:31:59 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:52.619 10:31:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:52.619 10:31:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:52.619 10:31:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.619 10:31:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.619 10:31:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.619 10:31:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.619 10:31:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.619 10:31:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.619 10:31:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.619 10:31:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:52.619 10:31:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:52.619 
[2024-11-19 10:32:00.015283] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:52.619 10:32:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:52.619 10:32:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:52.619 10:32:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:52.619 10:32:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:52.619 00:04:52.619 real 0m0.074s 00:04:52.619 user 0m0.051s 00:04:52.619 sys 0m0.022s 00:04:52.619 10:32:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.619 10:32:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:52.619 ************************************ 00:04:52.619 END TEST skip_rpc_with_delay 00:04:52.619 ************************************ 00:04:52.619 10:32:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:52.619 10:32:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:52.619 10:32:00 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:52.619 10:32:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.619 10:32:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.619 10:32:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.879 ************************************ 00:04:52.879 START TEST exit_on_failed_rpc_init 00:04:52.879 ************************************ 00:04:52.879 10:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:52.879 10:32:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1497706 00:04:52.879 10:32:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1497706 00:04:52.879 10:32:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:52.879 10:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1497706 ']' 00:04:52.879 10:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.879 10:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.879 10:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.879 10:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.879 10:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:52.879 [2024-11-19 10:32:00.158809] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:04:52.879 [2024-11-19 10:32:00.158849] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1497706 ] 00:04:52.879 [2024-11-19 10:32:00.236297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.879 [2024-11-19 10:32:00.279027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.138 10:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:53.138 10:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:53.138 10:32:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:53.138 10:32:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:53.138 10:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:53.138 10:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:53.138 10:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.138 10:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:53.138 10:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.138 10:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:53.138 10:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.138 10:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:53.138 10:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.138 10:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:53.138 10:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:53.138 [2024-11-19 10:32:00.553632] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:04:53.138 [2024-11-19 10:32:00.553678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1497859 ] 00:04:53.397 [2024-11-19 10:32:00.630714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.397 [2024-11-19 10:32:00.672302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.397 [2024-11-19 10:32:00.672356] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:53.397 [2024-11-19 10:32:00.672365] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:53.397 [2024-11-19 10:32:00.672374] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:53.397 10:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:53.397 10:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:53.397 10:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:53.397 10:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:53.397 10:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:53.397 10:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:53.397 10:32:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:53.397 10:32:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1497706 00:04:53.397 10:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1497706 ']' 00:04:53.397 10:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1497706 00:04:53.397 10:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:53.397 10:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:53.397 10:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1497706 00:04:53.397 10:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:53.397 10:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:53.397 10:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1497706' 00:04:53.397 killing process with pid 1497706 00:04:53.397 10:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1497706 00:04:53.397 10:32:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1497706 00:04:53.657 00:04:53.657 real 0m0.961s 00:04:53.657 user 0m1.025s 00:04:53.657 sys 0m0.388s 00:04:53.657 10:32:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.657 10:32:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:53.657 ************************************ 00:04:53.657 END TEST exit_on_failed_rpc_init 00:04:53.657 ************************************ 00:04:53.657 10:32:01 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:53.657 00:04:53.657 real 0m13.172s 00:04:53.657 user 0m12.416s 00:04:53.657 sys 0m1.581s 00:04:53.657 10:32:01 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.657 10:32:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.657 ************************************ 00:04:53.657 END TEST skip_rpc 00:04:53.657 ************************************ 00:04:53.915 10:32:01 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:53.915 10:32:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.915 10:32:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.915 10:32:01 -- 
common/autotest_common.sh@10 -- # set +x 00:04:53.915 ************************************ 00:04:53.916 START TEST rpc_client 00:04:53.916 ************************************ 00:04:53.916 10:32:01 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:53.916 * Looking for test storage... 00:04:53.916 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:53.916 10:32:01 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:53.916 10:32:01 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:53.916 10:32:01 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:53.916 10:32:01 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:53.916 10:32:01 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.916 10:32:01 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.916 10:32:01 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.916 10:32:01 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.916 10:32:01 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.916 10:32:01 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.916 10:32:01 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.916 10:32:01 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.916 10:32:01 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.916 10:32:01 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.916 10:32:01 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.916 10:32:01 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:53.916 10:32:01 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:53.916 10:32:01 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.916 10:32:01 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:53.916 10:32:01 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:53.916 10:32:01 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:53.916 10:32:01 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.916 10:32:01 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:53.916 10:32:01 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.916 10:32:01 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:53.916 10:32:01 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:53.916 10:32:01 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.916 10:32:01 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:53.916 10:32:01 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.916 10:32:01 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.916 10:32:01 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.916 10:32:01 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:53.916 10:32:01 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.916 10:32:01 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:53.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.916 --rc genhtml_branch_coverage=1 00:04:53.916 --rc genhtml_function_coverage=1 00:04:53.916 --rc genhtml_legend=1 00:04:53.916 --rc geninfo_all_blocks=1 00:04:53.916 --rc geninfo_unexecuted_blocks=1 00:04:53.916 00:04:53.916 ' 00:04:53.916 10:32:01 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:53.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.916 --rc genhtml_branch_coverage=1 00:04:53.916 --rc genhtml_function_coverage=1 00:04:53.916 --rc genhtml_legend=1 00:04:53.916 --rc geninfo_all_blocks=1 00:04:53.916 --rc geninfo_unexecuted_blocks=1 00:04:53.916 00:04:53.916 ' 00:04:53.916 10:32:01 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:53.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.916 --rc genhtml_branch_coverage=1 00:04:53.916 --rc genhtml_function_coverage=1 00:04:53.916 --rc genhtml_legend=1 00:04:53.916 --rc geninfo_all_blocks=1 00:04:53.916 --rc geninfo_unexecuted_blocks=1 00:04:53.916 00:04:53.916 ' 00:04:53.916 10:32:01 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:53.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.916 --rc genhtml_branch_coverage=1 00:04:53.916 --rc genhtml_function_coverage=1 00:04:53.916 --rc genhtml_legend=1 00:04:53.916 --rc geninfo_all_blocks=1 00:04:53.916 --rc geninfo_unexecuted_blocks=1 00:04:53.916 00:04:53.916 ' 00:04:53.916 10:32:01 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:53.916 OK 00:04:54.176 10:32:01 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:54.176 00:04:54.176 real 0m0.194s 00:04:54.176 user 0m0.112s 00:04:54.176 sys 0m0.095s 00:04:54.176 10:32:01 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.176 10:32:01 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:54.176 ************************************ 00:04:54.176 END TEST rpc_client 00:04:54.176 ************************************ 00:04:54.176 10:32:01 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
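[editor's note] The rpc_client run that just finished exercises rpc_client_test, a C client that speaks JSON-RPC to spdk_tgt over its Unix domain socket. A rough shell equivalent of the same round trip, using only the socket path and RPC methods visible elsewhere in this trace (a sketch for orientation, not the test binary itself; rpc_get_methods is a standard SPDK RPC not shown in this log):
    build/bin/spdk_tgt -m 0x1 &                              # target listens on /var/tmp/spdk.sock by default
    scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version    # same method the skip_rpc tests poke at
    scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods     # enumerate what the server exposes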
00:04:54.176 10:32:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.176 10:32:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.176 10:32:01 -- common/autotest_common.sh@10 -- # set +x 00:04:54.176 ************************************ 00:04:54.176 START TEST json_config 00:04:54.176 ************************************ 00:04:54.176 10:32:01 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:54.176 10:32:01 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:54.176 10:32:01 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:54.176 10:32:01 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:54.176 10:32:01 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:54.176 10:32:01 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.176 10:32:01 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.176 10:32:01 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.176 10:32:01 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.176 10:32:01 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.176 10:32:01 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.176 10:32:01 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.176 10:32:01 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.176 10:32:01 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.176 10:32:01 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.176 10:32:01 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.176 10:32:01 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:54.176 10:32:01 json_config -- scripts/common.sh@345 -- # : 1 00:04:54.176 10:32:01 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.176 10:32:01 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:54.176 10:32:01 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:54.176 10:32:01 json_config -- scripts/common.sh@353 -- # local d=1 00:04:54.176 10:32:01 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.176 10:32:01 json_config -- scripts/common.sh@355 -- # echo 1 00:04:54.176 10:32:01 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.176 10:32:01 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:54.176 10:32:01 json_config -- scripts/common.sh@353 -- # local d=2 00:04:54.176 10:32:01 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.176 10:32:01 json_config -- scripts/common.sh@355 -- # echo 2 00:04:54.176 10:32:01 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.176 10:32:01 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.176 10:32:01 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.176 10:32:01 json_config -- scripts/common.sh@368 -- # return 0 00:04:54.176 10:32:01 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.176 10:32:01 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:54.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.176 --rc genhtml_branch_coverage=1 00:04:54.176 --rc genhtml_function_coverage=1 00:04:54.176 --rc genhtml_legend=1 00:04:54.176 --rc geninfo_all_blocks=1 00:04:54.176 --rc geninfo_unexecuted_blocks=1 00:04:54.176 00:04:54.176 ' 00:04:54.176 10:32:01 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:54.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.176 --rc genhtml_branch_coverage=1 00:04:54.176 --rc genhtml_function_coverage=1 00:04:54.176 --rc genhtml_legend=1 00:04:54.176 --rc geninfo_all_blocks=1 00:04:54.176 --rc geninfo_unexecuted_blocks=1 00:04:54.176 00:04:54.176 ' 00:04:54.176 10:32:01 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:54.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.176 --rc genhtml_branch_coverage=1 00:04:54.176 --rc genhtml_function_coverage=1 00:04:54.176 --rc genhtml_legend=1 00:04:54.176 --rc geninfo_all_blocks=1 00:04:54.176 --rc geninfo_unexecuted_blocks=1 00:04:54.176 00:04:54.176 ' 00:04:54.176 10:32:01 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:54.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.176 --rc genhtml_branch_coverage=1 00:04:54.176 --rc genhtml_function_coverage=1 00:04:54.176 --rc genhtml_legend=1 00:04:54.176 --rc geninfo_all_blocks=1 00:04:54.176 --rc geninfo_unexecuted_blocks=1 00:04:54.176 00:04:54.176 ' 00:04:54.176 10:32:01 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:54.176 10:32:01 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:54.176 10:32:01 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:54.176 10:32:01 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:54.176 10:32:01 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:54.176 10:32:01 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:54.176 10:32:01 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:54.176 10:32:01 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:54.176 10:32:01 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:54.176 10:32:01 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:54.176 10:32:01 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:54.176 10:32:01 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:54.176 10:32:01 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:54.176 10:32:01 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:54.176 10:32:01 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:54.176 10:32:01 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:54.176 10:32:01 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:54.176 10:32:01 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:54.176 10:32:01 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:54.176 10:32:01 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:54.176 10:32:01 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:54.176 10:32:01 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:54.177 10:32:01 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:54.177 10:32:01 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.177 10:32:01 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.177 10:32:01 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.177 10:32:01 json_config -- paths/export.sh@5 -- # export PATH 00:04:54.177 10:32:01 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.177 10:32:01 json_config -- nvmf/common.sh@51 -- # : 0 00:04:54.177 10:32:01 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:54.177 10:32:01 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:04:54.177 10:32:01 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:54.177 10:32:01 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:54.177 10:32:01 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:54.177 10:32:01 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:54.177 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:54.177 10:32:01 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:54.177 10:32:01 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:54.177 10:32:01 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:54.177 10:32:01 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:54.177 10:32:01 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:54.177 10:32:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:54.177 10:32:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:54.177 10:32:01 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:54.177 10:32:01 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:54.177 10:32:01 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:54.177 10:32:01 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:54.177 10:32:01 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:54.177 10:32:01 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:54.177 10:32:01 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:54.177 10:32:01 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:54.177 10:32:01 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:54.177 10:32:01 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:54.177 10:32:01 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:54.177 10:32:01 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:54.177 INFO: JSON configuration test init 00:04:54.177 10:32:01 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:54.177 10:32:01 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:54.177 10:32:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:54.177 10:32:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:54.436 10:32:01 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:54.436 10:32:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:54.436 10:32:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:54.436 10:32:01 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:54.436 10:32:01 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:54.436 10:32:01 json_config -- json_config/common.sh@10 -- # shift 00:04:54.436 10:32:01 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:54.436 10:32:01 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:54.436 10:32:01 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:54.436 10:32:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:54.436 10:32:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:54.436 10:32:01 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1498162 00:04:54.436 10:32:01 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:54.436 Waiting for target to run... 00:04:54.436 10:32:01 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:54.436 10:32:01 json_config -- json_config/common.sh@25 -- # waitforlisten 1498162 /var/tmp/spdk_tgt.sock 00:04:54.436 10:32:01 json_config -- common/autotest_common.sh@835 -- # '[' -z 1498162 ']' 00:04:54.436 10:32:01 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:54.436 10:32:01 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:54.436 10:32:01 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:54.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:54.436 10:32:01 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:54.436 10:32:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:54.436 [2024-11-19 10:32:01.688118] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
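[editor's note] The --wait-for-rpc flag on the spdk_tgt command line above holds the target before subsystem initialization so configuration can be pushed in over the RPC socket first; contrast the skip_rpc_with_delay check earlier, where app.c rejected --wait-for-rpc combined with --no-rpc-server. A minimal sketch of driving such a target by hand, under the assumption that framework_start_init (the standard SPDK RPC that releases a paused app into init) is what ultimately kicks this one forward — the flags and socket path are copied from the command line just traced:
    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    # queue any pre-init RPCs here (e.g. accel or iobuf options), then:
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init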
00:04:54.436 [2024-11-19 10:32:01.688172] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1498162 ] 00:04:54.695 [2024-11-19 10:32:02.141922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.953 [2024-11-19 10:32:02.196629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.212 10:32:02 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:55.212 10:32:02 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:55.212 10:32:02 json_config -- json_config/common.sh@26 -- # echo '' 00:04:55.212 00:04:55.212 10:32:02 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:55.212 10:32:02 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:55.212 10:32:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:55.212 10:32:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.212 10:32:02 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:55.212 10:32:02 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:55.212 10:32:02 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:55.212 10:32:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.212 10:32:02 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:55.212 10:32:02 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:55.212 10:32:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:58.501 10:32:05 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:58.501 10:32:05 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:58.501 10:32:05 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:58.501 10:32:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.501 10:32:05 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:58.501 10:32:05 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:58.501 10:32:05 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:58.501 10:32:05 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:58.501 10:32:05 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:58.501 10:32:05 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:58.501 10:32:05 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:58.501 10:32:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:58.501 10:32:05 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:58.501 10:32:05 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:58.501 10:32:05 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:58.501 10:32:05 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:58.501 10:32:05 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:58.501 10:32:05 json_config -- json_config/json_config.sh@54 -- # sort 00:04:58.501 10:32:05 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:58.501 10:32:05 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:58.501 10:32:05 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:58.501 10:32:05 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:58.501 10:32:05 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:58.501 10:32:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.501 10:32:05 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:58.501 10:32:05 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:58.501 10:32:05 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:58.501 10:32:05 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:58.501 10:32:05 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:58.501 10:32:05 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:58.501 10:32:05 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:58.501 10:32:05 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:58.501 10:32:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.501 10:32:05 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:58.501 10:32:05 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:58.501 10:32:05 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:58.501 10:32:05 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:58.501 10:32:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:58.760 MallocForNvmf0 00:04:58.760 10:32:06 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:58.760 10:32:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:59.019 MallocForNvmf1 00:04:59.019 10:32:06 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:59.019 10:32:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:59.277 [2024-11-19 10:32:06.495277] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:59.277 10:32:06 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:59.278 10:32:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:59.278 10:32:06 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:59.278 10:32:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:59.536 10:32:06 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:59.536 10:32:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:59.795 10:32:07 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:59.795 10:32:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:00.054 [2024-11-19 10:32:07.285749] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:00.054 10:32:07 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:00.054 10:32:07 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:00.054 10:32:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.054 10:32:07 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:00.054 10:32:07 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:00.054 10:32:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.054 10:32:07 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:00.054 10:32:07 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:00.054 10:32:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:00.314 MallocBdevForConfigChangeCheck 00:05:00.314 10:32:07 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:00.314 10:32:07 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:00.314 10:32:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.314 10:32:07 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:00.314 10:32:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:00.573 10:32:07 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:00.573 INFO: shutting down applications... 
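For reference, the create_nvmf_subsystem_config phase traced above reduces to the RPC sequence below. A minimal sketch only: command names and arguments are copied verbatim from the trace, while the shortened rpc.py path and the $rpc shorthand are editorial conveniences, not SPDK's scripting.

    # NVMe-oF/TCP target as assembled in the trace above.
    rpc='scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0    # 8 MB, 512 B blocks
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1   # 4 MB, 1024 B blocks
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420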
00:05:00.573 10:32:07 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:00.573 10:32:07 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:00.573 10:32:07 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:00.573 10:32:07 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:02.477 Calling clear_iscsi_subsystem 00:05:02.477 Calling clear_nvmf_subsystem 00:05:02.477 Calling clear_nbd_subsystem 00:05:02.477 Calling clear_ublk_subsystem 00:05:02.477 Calling clear_vhost_blk_subsystem 00:05:02.477 Calling clear_vhost_scsi_subsystem 00:05:02.477 Calling clear_bdev_subsystem 00:05:02.477 10:32:09 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:02.477 10:32:09 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:02.477 10:32:09 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:02.477 10:32:09 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:02.477 10:32:09 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:02.477 10:32:09 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:02.477 10:32:09 json_config -- json_config/json_config.sh@352 -- # break 00:05:02.477 10:32:09 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:02.477 10:32:09 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:02.477 10:32:09 json_config -- json_config/common.sh@31 -- # local app=target 00:05:02.477 10:32:09 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:02.477 10:32:09 json_config -- json_config/common.sh@35 -- # [[ -n 1498162 ]] 00:05:02.477 10:32:09 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1498162 00:05:02.477 10:32:09 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:02.477 10:32:09 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:02.477 10:32:09 json_config -- json_config/common.sh@41 -- # kill -0 1498162 00:05:02.477 10:32:09 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:03.045 10:32:10 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:03.045 10:32:10 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:03.045 10:32:10 json_config -- json_config/common.sh@41 -- # kill -0 1498162 00:05:03.045 10:32:10 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:03.045 10:32:10 json_config -- json_config/common.sh@43 -- # break 00:05:03.045 10:32:10 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:03.045 10:32:10 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:03.045 SPDK target shutdown done 00:05:03.045 10:32:10 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:03.045 INFO: relaunching applications... 
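The shutdown just traced (json_config/common.sh) is a plain SIGINT-then-poll pattern; a minimal sketch, with the PID from this run used purely for illustration:

    pid=1498162                               # spdk_tgt PID from this run
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break   # PID gone -> target exited
        sleep 0.5                             # up to ~15 s total, per the trace
    done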
00:05:03.045 10:32:10 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:03.045 10:32:10 json_config -- json_config/common.sh@9 -- # local app=target 00:05:03.045 10:32:10 json_config -- json_config/common.sh@10 -- # shift 00:05:03.045 10:32:10 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:03.045 10:32:10 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:03.045 10:32:10 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:03.045 10:32:10 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:03.045 10:32:10 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:03.045 10:32:10 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1499752 00:05:03.045 10:32:10 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:03.045 Waiting for target to run... 00:05:03.045 10:32:10 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:03.045 10:32:10 json_config -- json_config/common.sh@25 -- # waitforlisten 1499752 /var/tmp/spdk_tgt.sock 00:05:03.045 10:32:10 json_config -- common/autotest_common.sh@835 -- # '[' -z 1499752 ']' 00:05:03.045 10:32:10 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:03.045 10:32:10 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.045 10:32:10 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:03.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:03.045 10:32:10 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.045 10:32:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.045 [2024-11-19 10:32:10.456890] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:05:03.045 [2024-11-19 10:32:10.456941] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1499752 ] 00:05:03.309 [2024-11-19 10:32:10.747653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.640 [2024-11-19 10:32:10.785274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.967 [2024-11-19 10:32:13.820852] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:06.967 [2024-11-19 10:32:13.853196] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:06.967 10:32:13 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:06.967 10:32:13 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:06.967 10:32:13 json_config -- json_config/common.sh@26 -- # echo '' 00:05:06.967 00:05:06.967 10:32:13 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:06.967 10:32:13 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:06.967 INFO: Checking if target configuration is the same... 
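The relaunch above restarts the target directly from the JSON that save_config wrote. A sketch with flags verbatim from the trace and paths shortened for readability:

    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json spdk_tgt_config.json &
    # The harness (waitforlisten) then polls until the RPC socket given
    # via -r answers before the configuration comparison below begins.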
00:05:06.967 10:32:13 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:06.967 10:32:13 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:06.967 10:32:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:06.967 + '[' 2 -ne 2 ']' 00:05:06.967 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:06.967 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:06.967 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:06.967 +++ basename /dev/fd/62 00:05:06.967 ++ mktemp /tmp/62.XXX 00:05:06.967 + tmp_file_1=/tmp/62.gwL 00:05:06.967 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:06.967 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:06.967 + tmp_file_2=/tmp/spdk_tgt_config.json.W8Q 00:05:06.967 + ret=0 00:05:06.967 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:06.967 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:06.967 + diff -u /tmp/62.gwL /tmp/spdk_tgt_config.json.W8Q 00:05:06.967 + echo 'INFO: JSON config files are the same' 00:05:06.967 INFO: JSON config files are the same 00:05:06.967 + rm /tmp/62.gwL /tmp/spdk_tgt_config.json.W8Q 00:05:06.967 + exit 0 00:05:06.967 10:32:14 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:06.967 10:32:14 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:06.967 INFO: changing configuration and checking if this can be detected... 00:05:06.967 10:32:14 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:06.967 10:32:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:07.226 10:32:14 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:07.226 10:32:14 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:07.226 10:32:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:07.226 + '[' 2 -ne 2 ']' 00:05:07.226 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:07.226 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:07.226 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:07.226 +++ basename /dev/fd/62 00:05:07.226 ++ mktemp /tmp/62.XXX 00:05:07.226 + tmp_file_1=/tmp/62.hSr 00:05:07.226 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:07.226 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:07.226 + tmp_file_2=/tmp/spdk_tgt_config.json.Nsw 00:05:07.226 + ret=0 00:05:07.226 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:07.486 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:07.486 + diff -u /tmp/62.hSr /tmp/spdk_tgt_config.json.Nsw 00:05:07.486 + ret=1 00:05:07.486 + echo '=== Start of file: /tmp/62.hSr ===' 00:05:07.486 + cat /tmp/62.hSr 00:05:07.486 + echo '=== End of file: /tmp/62.hSr ===' 00:05:07.486 + echo '' 00:05:07.486 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Nsw ===' 00:05:07.486 + cat /tmp/spdk_tgt_config.json.Nsw 00:05:07.486 + echo '=== End of file: /tmp/spdk_tgt_config.json.Nsw ===' 00:05:07.486 + echo '' 00:05:07.486 + rm /tmp/62.hSr /tmp/spdk_tgt_config.json.Nsw 00:05:07.486 + exit 1 00:05:07.486 10:32:14 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:07.486 INFO: configuration change detected. 00:05:07.486 10:32:14 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:07.486 10:32:14 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:07.486 10:32:14 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:07.486 10:32:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.486 10:32:14 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:07.486 10:32:14 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:07.486 10:32:14 json_config -- json_config/json_config.sh@324 -- # [[ -n 1499752 ]] 00:05:07.486 10:32:14 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:07.486 10:32:14 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:07.486 10:32:14 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:07.486 10:32:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.486 10:32:14 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:07.486 10:32:14 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:07.486 10:32:14 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:07.486 10:32:14 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:07.486 10:32:14 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:07.486 10:32:14 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:07.486 10:32:14 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:07.486 10:32:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.486 10:32:14 json_config -- json_config/json_config.sh@330 -- # killprocess 1499752 00:05:07.486 10:32:14 json_config -- common/autotest_common.sh@954 -- # '[' -z 1499752 ']' 00:05:07.486 10:32:14 json_config -- common/autotest_common.sh@958 -- # kill -0 1499752 00:05:07.486 10:32:14 json_config -- common/autotest_common.sh@959 -- # uname 00:05:07.486 10:32:14 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:07.745 10:32:14 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1499752 00:05:07.745 10:32:14 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:07.745 10:32:14 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:07.745 10:32:14 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1499752' 00:05:07.745 killing process with pid 1499752 00:05:07.745 10:32:14 json_config -- common/autotest_common.sh@973 -- # kill 1499752 00:05:07.745 10:32:14 json_config -- common/autotest_common.sh@978 -- # wait 1499752 00:05:09.123 10:32:16 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:09.124 10:32:16 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:09.124 10:32:16 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:09.124 10:32:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.124 10:32:16 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:09.124 10:32:16 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:09.124 INFO: Success 00:05:09.124 00:05:09.124 real 0m15.045s 00:05:09.124 user 0m15.521s 00:05:09.124 sys 0m2.564s 00:05:09.124 10:32:16 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.124 10:32:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.124 ************************************ 00:05:09.124 END TEST json_config 00:05:09.124 ************************************ 00:05:09.124 10:32:16 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:09.124 10:32:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.124 10:32:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.124 10:32:16 -- common/autotest_common.sh@10 -- # set +x 00:05:09.124 ************************************ 00:05:09.124 START TEST json_config_extra_key 00:05:09.124 ************************************ 00:05:09.124 10:32:16 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:09.384 10:32:16 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:09.384 10:32:16 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:09.384 10:32:16 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:09.384 10:32:16 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:09.384 10:32:16 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.384 10:32:16 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.384 10:32:16 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.384 10:32:16 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.384 10:32:16 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.384 10:32:16 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.384 10:32:16 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.384 10:32:16 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.384 10:32:16 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:05:09.384 10:32:16 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.384 10:32:16 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.384 10:32:16 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:09.384 10:32:16 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:09.384 10:32:16 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.384 10:32:16 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:09.384 10:32:16 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:09.384 10:32:16 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:09.384 10:32:16 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.384 10:32:16 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:09.384 10:32:16 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.384 10:32:16 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:09.384 10:32:16 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:09.384 10:32:16 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.384 10:32:16 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:09.384 10:32:16 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.384 10:32:16 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.384 10:32:16 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.384 10:32:16 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:09.384 10:32:16 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.384 10:32:16 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:09.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.384 --rc genhtml_branch_coverage=1 00:05:09.384 --rc genhtml_function_coverage=1 00:05:09.384 --rc genhtml_legend=1 00:05:09.384 --rc geninfo_all_blocks=1 00:05:09.384 --rc geninfo_unexecuted_blocks=1 00:05:09.384 00:05:09.384 ' 00:05:09.384 10:32:16 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:09.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.384 --rc genhtml_branch_coverage=1 00:05:09.384 --rc genhtml_function_coverage=1 00:05:09.384 --rc genhtml_legend=1 00:05:09.384 --rc geninfo_all_blocks=1 00:05:09.384 --rc geninfo_unexecuted_blocks=1 00:05:09.384 00:05:09.384 ' 00:05:09.384 10:32:16 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:09.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.384 --rc genhtml_branch_coverage=1 00:05:09.384 --rc genhtml_function_coverage=1 00:05:09.384 --rc genhtml_legend=1 00:05:09.384 --rc geninfo_all_blocks=1 00:05:09.384 --rc geninfo_unexecuted_blocks=1 00:05:09.384 00:05:09.384 ' 00:05:09.384 10:32:16 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:09.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.384 --rc genhtml_branch_coverage=1 00:05:09.384 --rc genhtml_function_coverage=1 00:05:09.384 --rc genhtml_legend=1 00:05:09.384 --rc geninfo_all_blocks=1 00:05:09.384 --rc geninfo_unexecuted_blocks=1 00:05:09.384 00:05:09.384 ' 00:05:09.384 10:32:16 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:09.384 10:32:16 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:09.384 10:32:16 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:09.384 10:32:16 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:09.384 10:32:16 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:09.384 10:32:16 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:09.384 10:32:16 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:09.384 10:32:16 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:09.384 10:32:16 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:09.384 10:32:16 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:09.384 10:32:16 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:09.384 10:32:16 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:09.384 10:32:16 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:09.385 10:32:16 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:09.385 10:32:16 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:09.385 10:32:16 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:09.385 10:32:16 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:09.385 10:32:16 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:09.385 10:32:16 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:09.385 10:32:16 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:09.385 10:32:16 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:09.385 10:32:16 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:09.385 10:32:16 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:09.385 10:32:16 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.385 10:32:16 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.385 10:32:16 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.385 10:32:16 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:09.385 10:32:16 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.385 10:32:16 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:09.385 10:32:16 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:09.385 10:32:16 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:09.385 10:32:16 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:09.385 10:32:16 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:09.385 10:32:16 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:09.385 10:32:16 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:09.385 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:09.385 10:32:16 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:09.385 10:32:16 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:09.385 10:32:16 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:09.385 10:32:16 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:09.385 10:32:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:09.385 10:32:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:09.385 10:32:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:09.385 10:32:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:09.385 10:32:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:09.385 10:32:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:09.385 10:32:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:09.385 10:32:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:09.385 10:32:16 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:09.385 10:32:16 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:09.385 INFO: launching applications... 
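The "[: : integer expression expected" message above is what bash prints when test compares an empty string numerically, exactly as the trace shows at nvmf/common.sh line 33 ('[' '' -eq 1 ']'); it is noise here, not a failure. A minimal reproduction and a defensive variant (SOME_FLAG is a hypothetical name, not SPDK's):

    [ '' -eq 1 ]                   # prints: [: : integer expression expected
    [ "${SOME_FLAG:-0}" -eq 1 ]    # defaulting the value avoids the message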
00:05:09.385 10:32:16 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:09.385 10:32:16 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:09.385 10:32:16 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:09.385 10:32:16 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:09.385 10:32:16 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:09.385 10:32:16 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:09.385 10:32:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:09.385 10:32:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:09.385 10:32:16 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1500916 00:05:09.385 10:32:16 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:09.385 Waiting for target to run... 00:05:09.385 10:32:16 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1500916 /var/tmp/spdk_tgt.sock 00:05:09.385 10:32:16 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1500916 ']' 00:05:09.385 10:32:16 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:09.385 10:32:16 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:09.385 10:32:16 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.385 10:32:16 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:09.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:09.385 10:32:16 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.385 10:32:16 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:09.385 [2024-11-19 10:32:16.786664] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:05:09.385 [2024-11-19 10:32:16.786719] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1500916 ] 00:05:09.954 [2024-11-19 10:32:17.241112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.954 [2024-11-19 10:32:17.299436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.213 10:32:17 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.213 10:32:17 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:10.213 10:32:17 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:10.213 00:05:10.213 10:32:17 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:10.213 INFO: shutting down applications... 
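The shutdown that follows keys off the same per-app bookkeeping the launch populated: as the declares in the trace show, json_config/common.sh tracks each app in associative arrays keyed by app name. A sketch with values taken from this run:

    declare -A app_pid app_socket app_params configs_path
    app_socket[target]=/var/tmp/spdk_tgt.sock
    app_params[target]='-m 0x1 -s 1024'
    configs_path[target]=test/json_config/extra_key.json
    # app_pid[target] is filled in after launch; the shutdown below sends
    # kill -SIGINT "${app_pid[target]}" and then polls as in the earlier loop.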
00:05:10.213 10:32:17 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:10.213 10:32:17 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:10.213 10:32:17 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:10.213 10:32:17 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1500916 ]] 00:05:10.213 10:32:17 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1500916 00:05:10.213 10:32:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:10.213 10:32:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:10.213 10:32:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1500916 00:05:10.213 10:32:17 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:10.781 10:32:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:10.781 10:32:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:10.781 10:32:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1500916 00:05:10.781 10:32:18 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:10.781 10:32:18 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:10.781 10:32:18 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:10.781 10:32:18 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:10.781 SPDK target shutdown done 00:05:10.781 10:32:18 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:10.781 Success 00:05:10.781 00:05:10.781 real 0m1.578s 00:05:10.781 user 0m1.203s 00:05:10.781 sys 0m0.565s 00:05:10.781 10:32:18 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.781 10:32:18 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:10.781 ************************************ 00:05:10.781 END TEST json_config_extra_key 00:05:10.781 ************************************ 00:05:10.781 10:32:18 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:10.781 10:32:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.781 10:32:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.781 10:32:18 -- common/autotest_common.sh@10 -- # set +x 00:05:10.781 ************************************ 00:05:10.781 START TEST alias_rpc 00:05:10.781 ************************************ 00:05:10.781 10:32:18 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:11.041 * Looking for test storage... 
00:05:11.041 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:11.041 10:32:18 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:11.041 10:32:18 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:11.041 10:32:18 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:11.041 10:32:18 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:11.041 10:32:18 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:11.041 10:32:18 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:11.041 10:32:18 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:11.041 10:32:18 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.041 10:32:18 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:11.041 10:32:18 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:11.041 10:32:18 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:11.041 10:32:18 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:11.041 10:32:18 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:11.041 10:32:18 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:11.041 10:32:18 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:11.041 10:32:18 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:11.041 10:32:18 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:11.041 10:32:18 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:11.041 10:32:18 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:11.041 10:32:18 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:11.041 10:32:18 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:11.041 10:32:18 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.041 10:32:18 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:11.041 10:32:18 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:11.041 10:32:18 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:11.041 10:32:18 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:11.041 10:32:18 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.041 10:32:18 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:11.041 10:32:18 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:11.041 10:32:18 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:11.041 10:32:18 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:11.041 10:32:18 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:11.041 10:32:18 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.041 10:32:18 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:11.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.041 --rc genhtml_branch_coverage=1 00:05:11.041 --rc genhtml_function_coverage=1 00:05:11.041 --rc genhtml_legend=1 00:05:11.041 --rc geninfo_all_blocks=1 00:05:11.041 --rc geninfo_unexecuted_blocks=1 00:05:11.041 00:05:11.041 ' 00:05:11.041 10:32:18 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:11.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.041 --rc genhtml_branch_coverage=1 00:05:11.041 --rc genhtml_function_coverage=1 00:05:11.041 --rc genhtml_legend=1 00:05:11.041 --rc geninfo_all_blocks=1 00:05:11.041 --rc geninfo_unexecuted_blocks=1 00:05:11.041 00:05:11.041 ' 00:05:11.041 10:32:18 
alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:11.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.041 --rc genhtml_branch_coverage=1 00:05:11.041 --rc genhtml_function_coverage=1 00:05:11.041 --rc genhtml_legend=1 00:05:11.041 --rc geninfo_all_blocks=1 00:05:11.041 --rc geninfo_unexecuted_blocks=1 00:05:11.041 00:05:11.041 ' 00:05:11.041 10:32:18 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:11.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.041 --rc genhtml_branch_coverage=1 00:05:11.041 --rc genhtml_function_coverage=1 00:05:11.041 --rc genhtml_legend=1 00:05:11.041 --rc geninfo_all_blocks=1 00:05:11.041 --rc geninfo_unexecuted_blocks=1 00:05:11.041 00:05:11.041 ' 00:05:11.041 10:32:18 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:11.041 10:32:18 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1501312 00:05:11.041 10:32:18 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1501312 00:05:11.041 10:32:18 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:11.041 10:32:18 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1501312 ']' 00:05:11.041 10:32:18 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.041 10:32:18 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.041 10:32:18 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.041 10:32:18 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.041 10:32:18 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.041 [2024-11-19 10:32:18.436255] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:05:11.041 [2024-11-19 10:32:18.436307] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1501312 ] 00:05:11.301 [2024-11-19 10:32:18.512700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.301 [2024-11-19 10:32:18.555452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.560 10:32:18 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.560 10:32:18 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:11.560 10:32:18 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:11.560 10:32:18 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1501312 00:05:11.560 10:32:18 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1501312 ']' 00:05:11.560 10:32:18 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1501312 00:05:11.560 10:32:18 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:11.560 10:32:18 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:11.560 10:32:18 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1501312 00:05:11.819 10:32:19 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:11.819 10:32:19 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:11.819 10:32:19 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1501312' 00:05:11.819 killing process with pid 1501312 00:05:11.819 10:32:19 alias_rpc -- common/autotest_common.sh@973 -- # kill 1501312 00:05:11.819 10:32:19 alias_rpc -- common/autotest_common.sh@978 -- # wait 1501312 00:05:12.078 00:05:12.078 real 0m1.134s 00:05:12.078 user 0m1.160s 00:05:12.078 sys 0m0.416s 00:05:12.078 10:32:19 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.078 10:32:19 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.078 ************************************ 00:05:12.078 END TEST alias_rpc 00:05:12.078 ************************************ 00:05:12.078 10:32:19 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:12.078 10:32:19 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:12.078 10:32:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.078 10:32:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.078 10:32:19 -- common/autotest_common.sh@10 -- # set +x 00:05:12.078 ************************************ 00:05:12.078 START TEST spdkcli_tcp 00:05:12.078 ************************************ 00:05:12.078 10:32:19 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:12.078 * Looking for test storage... 
00:05:12.078 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:12.078 10:32:19 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:12.078 10:32:19 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:12.078 10:32:19 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:12.339 10:32:19 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:12.339 10:32:19 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.339 10:32:19 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.339 10:32:19 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.339 10:32:19 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.339 10:32:19 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.339 10:32:19 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.339 10:32:19 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.339 10:32:19 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.339 10:32:19 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.339 10:32:19 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.339 10:32:19 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.339 10:32:19 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:12.339 10:32:19 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:12.339 10:32:19 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.339 10:32:19 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:12.339 10:32:19 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:12.339 10:32:19 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:12.339 10:32:19 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.339 10:32:19 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:12.339 10:32:19 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.339 10:32:19 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:12.339 10:32:19 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:12.339 10:32:19 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.339 10:32:19 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:12.339 10:32:19 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.339 10:32:19 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.339 10:32:19 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.339 10:32:19 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:12.339 10:32:19 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.339 10:32:19 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:12.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.339 --rc genhtml_branch_coverage=1 00:05:12.339 --rc genhtml_function_coverage=1 00:05:12.339 --rc genhtml_legend=1 00:05:12.339 --rc geninfo_all_blocks=1 00:05:12.339 --rc geninfo_unexecuted_blocks=1 00:05:12.339 00:05:12.339 ' 00:05:12.339 10:32:19 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:12.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.339 --rc genhtml_branch_coverage=1 00:05:12.339 --rc genhtml_function_coverage=1 00:05:12.339 --rc genhtml_legend=1 00:05:12.339 --rc geninfo_all_blocks=1 00:05:12.339 --rc 
geninfo_unexecuted_blocks=1 00:05:12.339 00:05:12.339 ' 00:05:12.339 10:32:19 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:12.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.339 --rc genhtml_branch_coverage=1 00:05:12.339 --rc genhtml_function_coverage=1 00:05:12.339 --rc genhtml_legend=1 00:05:12.339 --rc geninfo_all_blocks=1 00:05:12.339 --rc geninfo_unexecuted_blocks=1 00:05:12.339 00:05:12.339 ' 00:05:12.339 10:32:19 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:12.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.339 --rc genhtml_branch_coverage=1 00:05:12.339 --rc genhtml_function_coverage=1 00:05:12.339 --rc genhtml_legend=1 00:05:12.339 --rc geninfo_all_blocks=1 00:05:12.339 --rc geninfo_unexecuted_blocks=1 00:05:12.339 00:05:12.339 ' 00:05:12.339 10:32:19 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:12.339 10:32:19 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:12.339 10:32:19 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:12.339 10:32:19 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:12.339 10:32:19 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:12.339 10:32:19 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:12.339 10:32:19 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:12.339 10:32:19 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:12.339 10:32:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:12.339 10:32:19 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1501538 00:05:12.339 10:32:19 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1501538 00:05:12.339 10:32:19 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:12.339 10:32:19 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1501538 ']' 00:05:12.339 10:32:19 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.339 10:32:19 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.339 10:32:19 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.339 10:32:19 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.339 10:32:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:12.339 [2024-11-19 10:32:19.647532] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:05:12.339 [2024-11-19 10:32:19.647584] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1501538 ]
00:05:12.339 [2024-11-19 10:32:19.711541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:12.339 [2024-11-19 10:32:19.753748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:12.339 [2024-11-19 10:32:19.753748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:12.599 10:32:19 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:12.599 10:32:19 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0
00:05:12.599 10:32:19 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1501613
00:05:12.599 10:32:19 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:05:12.599 10:32:19 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:05:12.859 [
00:05:12.859 "bdev_malloc_delete",
00:05:12.859 "bdev_malloc_create",
00:05:12.859 "bdev_null_resize",
00:05:12.859 "bdev_null_delete",
00:05:12.859 "bdev_null_create",
00:05:12.859 "bdev_nvme_cuse_unregister",
00:05:12.859 "bdev_nvme_cuse_register",
00:05:12.859 "bdev_opal_new_user",
00:05:12.859 "bdev_opal_set_lock_state",
00:05:12.859 "bdev_opal_delete",
00:05:12.859 "bdev_opal_get_info",
00:05:12.859 "bdev_opal_create",
00:05:12.859 "bdev_nvme_opal_revert",
00:05:12.859 "bdev_nvme_opal_init",
00:05:12.859 "bdev_nvme_send_cmd",
00:05:12.859 "bdev_nvme_set_keys",
00:05:12.859 "bdev_nvme_get_path_iostat",
00:05:12.859 "bdev_nvme_get_mdns_discovery_info",
00:05:12.859 "bdev_nvme_stop_mdns_discovery",
00:05:12.859 "bdev_nvme_start_mdns_discovery",
00:05:12.859 "bdev_nvme_set_multipath_policy",
00:05:12.859 "bdev_nvme_set_preferred_path",
00:05:12.859 "bdev_nvme_get_io_paths",
00:05:12.859 "bdev_nvme_remove_error_injection",
00:05:12.859 "bdev_nvme_add_error_injection",
00:05:12.859 "bdev_nvme_get_discovery_info",
00:05:12.859 "bdev_nvme_stop_discovery",
00:05:12.859 "bdev_nvme_start_discovery",
00:05:12.859 "bdev_nvme_get_controller_health_info",
00:05:12.859 "bdev_nvme_disable_controller",
00:05:12.859 "bdev_nvme_enable_controller",
00:05:12.859 "bdev_nvme_reset_controller",
00:05:12.859 "bdev_nvme_get_transport_statistics",
00:05:12.859 "bdev_nvme_apply_firmware",
00:05:12.859 "bdev_nvme_detach_controller",
00:05:12.859 "bdev_nvme_get_controllers",
00:05:12.859 "bdev_nvme_attach_controller",
00:05:12.859 "bdev_nvme_set_hotplug",
00:05:12.859 "bdev_nvme_set_options",
00:05:12.859 "bdev_passthru_delete",
00:05:12.859 "bdev_passthru_create",
00:05:12.859 "bdev_lvol_set_parent_bdev",
00:05:12.859 "bdev_lvol_set_parent",
00:05:12.859 "bdev_lvol_check_shallow_copy",
00:05:12.859 "bdev_lvol_start_shallow_copy",
00:05:12.859 "bdev_lvol_grow_lvstore",
00:05:12.859 "bdev_lvol_get_lvols",
00:05:12.859 "bdev_lvol_get_lvstores",
00:05:12.859 "bdev_lvol_delete",
00:05:12.859 "bdev_lvol_set_read_only",
00:05:12.859 "bdev_lvol_resize",
00:05:12.859 "bdev_lvol_decouple_parent",
00:05:12.859 "bdev_lvol_inflate",
00:05:12.859 "bdev_lvol_rename",
00:05:12.859 "bdev_lvol_clone_bdev",
00:05:12.859 "bdev_lvol_clone",
00:05:12.859 "bdev_lvol_snapshot",
00:05:12.859 "bdev_lvol_create",
00:05:12.859 "bdev_lvol_delete_lvstore",
00:05:12.859 "bdev_lvol_rename_lvstore",
00:05:12.859 "bdev_lvol_create_lvstore",
00:05:12.859 "bdev_raid_set_options",
00:05:12.859 "bdev_raid_remove_base_bdev",
00:05:12.859 "bdev_raid_add_base_bdev",
00:05:12.859 "bdev_raid_delete",
00:05:12.859 "bdev_raid_create",
00:05:12.859 "bdev_raid_get_bdevs",
00:05:12.859 "bdev_error_inject_error",
00:05:12.859 "bdev_error_delete",
00:05:12.859 "bdev_error_create",
00:05:12.859 "bdev_split_delete",
00:05:12.859 "bdev_split_create",
00:05:12.859 "bdev_delay_delete",
00:05:12.859 "bdev_delay_create",
00:05:12.859 "bdev_delay_update_latency",
00:05:12.859 "bdev_zone_block_delete",
00:05:12.859 "bdev_zone_block_create",
00:05:12.859 "blobfs_create",
00:05:12.859 "blobfs_detect",
00:05:12.859 "blobfs_set_cache_size",
00:05:12.859 "bdev_aio_delete",
00:05:12.859 "bdev_aio_rescan",
00:05:12.859 "bdev_aio_create",
00:05:12.859 "bdev_ftl_set_property",
00:05:12.859 "bdev_ftl_get_properties",
00:05:12.859 "bdev_ftl_get_stats",
00:05:12.859 "bdev_ftl_unmap",
00:05:12.859 "bdev_ftl_unload",
00:05:12.859 "bdev_ftl_delete",
00:05:12.859 "bdev_ftl_load",
00:05:12.859 "bdev_ftl_create",
00:05:12.859 "bdev_virtio_attach_controller",
00:05:12.859 "bdev_virtio_scsi_get_devices",
00:05:12.859 "bdev_virtio_detach_controller",
00:05:12.859 "bdev_virtio_blk_set_hotplug",
00:05:12.859 "bdev_iscsi_delete",
00:05:12.859 "bdev_iscsi_create",
00:05:12.859 "bdev_iscsi_set_options",
00:05:12.859 "accel_error_inject_error",
00:05:12.859 "ioat_scan_accel_module",
00:05:12.859 "dsa_scan_accel_module",
00:05:12.859 "iaa_scan_accel_module",
00:05:12.859 "vfu_virtio_create_fs_endpoint",
00:05:12.859 "vfu_virtio_create_scsi_endpoint",
00:05:12.859 "vfu_virtio_scsi_remove_target",
00:05:12.859 "vfu_virtio_scsi_add_target",
00:05:12.859 "vfu_virtio_create_blk_endpoint",
00:05:12.859 "vfu_virtio_delete_endpoint",
00:05:12.859 "keyring_file_remove_key",
00:05:12.859 "keyring_file_add_key",
00:05:12.859 "keyring_linux_set_options",
00:05:12.859 "fsdev_aio_delete",
00:05:12.859 "fsdev_aio_create",
00:05:12.859 "iscsi_get_histogram",
00:05:12.859 "iscsi_enable_histogram",
00:05:12.859 "iscsi_set_options",
00:05:12.859 "iscsi_get_auth_groups",
00:05:12.859 "iscsi_auth_group_remove_secret",
00:05:12.859 "iscsi_auth_group_add_secret",
00:05:12.859 "iscsi_delete_auth_group",
00:05:12.859 "iscsi_create_auth_group",
00:05:12.859 "iscsi_set_discovery_auth",
00:05:12.859 "iscsi_get_options",
00:05:12.859 "iscsi_target_node_request_logout",
00:05:12.859 "iscsi_target_node_set_redirect",
00:05:12.859 "iscsi_target_node_set_auth",
00:05:12.859 "iscsi_target_node_add_lun",
00:05:12.859 "iscsi_get_stats",
00:05:12.859 "iscsi_get_connections",
00:05:12.859 "iscsi_portal_group_set_auth",
00:05:12.859 "iscsi_start_portal_group",
00:05:12.859 "iscsi_delete_portal_group",
00:05:12.859 "iscsi_create_portal_group",
00:05:12.859 "iscsi_get_portal_groups",
00:05:12.859 "iscsi_delete_target_node",
00:05:12.859 "iscsi_target_node_remove_pg_ig_maps",
00:05:12.859 "iscsi_target_node_add_pg_ig_maps",
00:05:12.859 "iscsi_create_target_node",
00:05:12.859 "iscsi_get_target_nodes",
00:05:12.859 "iscsi_delete_initiator_group",
00:05:12.859 "iscsi_initiator_group_remove_initiators",
00:05:12.859 "iscsi_initiator_group_add_initiators",
00:05:12.859 "iscsi_create_initiator_group",
00:05:12.859 "iscsi_get_initiator_groups",
00:05:12.859 "nvmf_set_crdt",
00:05:12.859 "nvmf_set_config",
00:05:12.859 "nvmf_set_max_subsystems",
00:05:12.859 "nvmf_stop_mdns_prr",
00:05:12.859 "nvmf_publish_mdns_prr",
00:05:12.859 "nvmf_subsystem_get_listeners",
00:05:12.859 "nvmf_subsystem_get_qpairs",
00:05:12.859 "nvmf_subsystem_get_controllers",
00:05:12.859 "nvmf_get_stats",
00:05:12.859 "nvmf_get_transports",
00:05:12.859 "nvmf_create_transport",
00:05:12.859 "nvmf_get_targets",
00:05:12.859 "nvmf_delete_target",
00:05:12.859 "nvmf_create_target",
00:05:12.859 "nvmf_subsystem_allow_any_host",
00:05:12.859 "nvmf_subsystem_set_keys",
00:05:12.859 "nvmf_subsystem_remove_host",
00:05:12.859 "nvmf_subsystem_add_host",
00:05:12.859 "nvmf_ns_remove_host",
00:05:12.859 "nvmf_ns_add_host",
00:05:12.859 "nvmf_subsystem_remove_ns",
00:05:12.859 "nvmf_subsystem_set_ns_ana_group",
00:05:12.859 "nvmf_subsystem_add_ns",
00:05:12.859 "nvmf_subsystem_listener_set_ana_state",
00:05:12.859 "nvmf_discovery_get_referrals",
00:05:12.859 "nvmf_discovery_remove_referral",
00:05:12.859 "nvmf_discovery_add_referral",
00:05:12.860 "nvmf_subsystem_remove_listener",
00:05:12.860 "nvmf_subsystem_add_listener",
00:05:12.860 "nvmf_delete_subsystem",
00:05:12.860 "nvmf_create_subsystem",
00:05:12.860 "nvmf_get_subsystems",
00:05:12.860 "env_dpdk_get_mem_stats",
00:05:12.860 "nbd_get_disks",
00:05:12.860 "nbd_stop_disk",
00:05:12.860 "nbd_start_disk",
00:05:12.860 "ublk_recover_disk",
00:05:12.860 "ublk_get_disks",
00:05:12.860 "ublk_stop_disk",
00:05:12.860 "ublk_start_disk",
00:05:12.860 "ublk_destroy_target",
00:05:12.860 "ublk_create_target",
00:05:12.860 "virtio_blk_create_transport",
00:05:12.860 "virtio_blk_get_transports",
00:05:12.860 "vhost_controller_set_coalescing",
00:05:12.860 "vhost_get_controllers",
00:05:12.860 "vhost_delete_controller",
00:05:12.860 "vhost_create_blk_controller",
00:05:12.860 "vhost_scsi_controller_remove_target",
00:05:12.860 "vhost_scsi_controller_add_target",
00:05:12.860 "vhost_start_scsi_controller",
00:05:12.860 "vhost_create_scsi_controller",
00:05:12.860 "thread_set_cpumask",
00:05:12.860 "scheduler_set_options",
00:05:12.860 "framework_get_governor",
00:05:12.860 "framework_get_scheduler",
00:05:12.860 "framework_set_scheduler",
00:05:12.860 "framework_get_reactors",
00:05:12.860 "thread_get_io_channels",
00:05:12.860 "thread_get_pollers",
00:05:12.860 "thread_get_stats",
00:05:12.860 "framework_monitor_context_switch",
00:05:12.860 "spdk_kill_instance",
00:05:12.860 "log_enable_timestamps",
00:05:12.860 "log_get_flags",
00:05:12.860 "log_clear_flag",
00:05:12.860 "log_set_flag",
00:05:12.860 "log_get_level",
00:05:12.860 "log_set_level",
00:05:12.860 "log_get_print_level",
00:05:12.860 "log_set_print_level",
00:05:12.860 "framework_enable_cpumask_locks",
00:05:12.860 "framework_disable_cpumask_locks",
00:05:12.860 "framework_wait_init",
00:05:12.860 "framework_start_init",
00:05:12.860 "scsi_get_devices",
00:05:12.860 "bdev_get_histogram",
00:05:12.860 "bdev_enable_histogram",
00:05:12.860 "bdev_set_qos_limit",
00:05:12.860 "bdev_set_qd_sampling_period",
00:05:12.860 "bdev_get_bdevs",
00:05:12.860 "bdev_reset_iostat",
00:05:12.860 "bdev_get_iostat",
00:05:12.860 "bdev_examine",
00:05:12.860 "bdev_wait_for_examine",
00:05:12.860 "bdev_set_options",
00:05:12.860 "accel_get_stats",
00:05:12.860 "accel_set_options",
00:05:12.860 "accel_set_driver",
00:05:12.860 "accel_crypto_key_destroy",
00:05:12.860 "accel_crypto_keys_get",
00:05:12.860 "accel_crypto_key_create",
00:05:12.860 "accel_assign_opc",
00:05:12.860 "accel_get_module_info",
00:05:12.860 "accel_get_opc_assignments",
00:05:12.860 "vmd_rescan",
00:05:12.860 "vmd_remove_device",
00:05:12.860 "vmd_enable",
00:05:12.860 "sock_get_default_impl",
00:05:12.860 "sock_set_default_impl",
00:05:12.860 "sock_impl_set_options",
00:05:12.860 "sock_impl_get_options",
00:05:12.860 "iobuf_get_stats",
00:05:12.860 "iobuf_set_options",
00:05:12.860 "keyring_get_keys",
00:05:12.860 "vfu_tgt_set_base_path",
00:05:12.860 "framework_get_pci_devices",
00:05:12.860 "framework_get_config",
00:05:12.860 "framework_get_subsystems",
00:05:12.860 "fsdev_set_opts",
00:05:12.860 "fsdev_get_opts",
00:05:12.860 "trace_get_info",
00:05:12.860 "trace_get_tpoint_group_mask",
00:05:12.860 "trace_disable_tpoint_group",
00:05:12.860 "trace_enable_tpoint_group",
00:05:12.860 "trace_clear_tpoint_mask",
00:05:12.860 "trace_set_tpoint_mask",
00:05:12.860 "notify_get_notifications",
00:05:12.860 "notify_get_types",
00:05:12.860 "spdk_get_version",
00:05:12.860 "rpc_get_methods"
00:05:12.860 ]
00:05:12.860 10:32:20 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:05:12.860 10:32:20 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:12.860 10:32:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:05:12.860 10:32:20 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:05:12.860 10:32:20 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1501538
00:05:12.860 10:32:20 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1501538 ']'
00:05:12.860 10:32:20 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1501538
00:05:12.860 10:32:20 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname
00:05:12.860 10:32:20 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:12.860 10:32:20 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1501538
00:05:12.860 10:32:20 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:12.860 10:32:20 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:12.860 10:32:20 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1501538' killing process with pid 1501538 10:32:20 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1501538
00:05:13.119 10:32:20 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1501538
00:05:13.119
00:05:13.119 real 0m1.151s
00:05:13.119 user 0m1.933s
00:05:13.119 sys 0m0.454s
00:05:13.119 10:32:20 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:13.119 10:32:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:05:13.119 ************************************
00:05:13.119 END TEST spdkcli_tcp
00:05:13.119 ************************************
00:05:13.378 10:32:20 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:05:13.378 10:32:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:13.378 10:32:20 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:13.378 10:32:20 -- common/autotest_common.sh@10 -- # set +x
00:05:13.378 ************************************
00:05:13.378 START TEST dpdk_mem_utility
00:05:13.378 ************************************
00:05:13.378 10:32:20 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:05:13.378 * Looking for test storage...
00:05:13.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:13.378 10:32:20 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:13.378 10:32:20 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:13.378 10:32:20 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:13.379 10:32:20 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:13.379 10:32:20 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.379 10:32:20 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.379 10:32:20 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.379 10:32:20 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.379 10:32:20 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.379 10:32:20 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.379 10:32:20 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.379 10:32:20 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.379 10:32:20 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.379 10:32:20 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.379 10:32:20 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.379 10:32:20 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:13.379 10:32:20 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:13.379 10:32:20 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.379 10:32:20 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:13.379 10:32:20 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:13.379 10:32:20 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:13.379 10:32:20 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.379 10:32:20 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:13.379 10:32:20 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.379 10:32:20 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:13.379 10:32:20 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:13.379 10:32:20 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.379 10:32:20 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:13.379 10:32:20 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.379 10:32:20 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.379 10:32:20 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.379 10:32:20 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:13.379 10:32:20 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.379 10:32:20 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:13.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.379 --rc genhtml_branch_coverage=1 00:05:13.379 --rc genhtml_function_coverage=1 00:05:13.379 --rc genhtml_legend=1 00:05:13.379 --rc geninfo_all_blocks=1 00:05:13.379 --rc geninfo_unexecuted_blocks=1 00:05:13.379 00:05:13.379 ' 00:05:13.379 10:32:20 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:13.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.379 --rc 
genhtml_branch_coverage=1 00:05:13.379 --rc genhtml_function_coverage=1 00:05:13.379 --rc genhtml_legend=1 00:05:13.379 --rc geninfo_all_blocks=1 00:05:13.379 --rc geninfo_unexecuted_blocks=1 00:05:13.379 00:05:13.379 ' 00:05:13.379 10:32:20 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:13.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.379 --rc genhtml_branch_coverage=1 00:05:13.379 --rc genhtml_function_coverage=1 00:05:13.379 --rc genhtml_legend=1 00:05:13.379 --rc geninfo_all_blocks=1 00:05:13.379 --rc geninfo_unexecuted_blocks=1 00:05:13.379 00:05:13.379 ' 00:05:13.379 10:32:20 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:13.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.379 --rc genhtml_branch_coverage=1 00:05:13.379 --rc genhtml_function_coverage=1 00:05:13.379 --rc genhtml_legend=1 00:05:13.379 --rc geninfo_all_blocks=1 00:05:13.379 --rc geninfo_unexecuted_blocks=1 00:05:13.379 00:05:13.379 ' 00:05:13.379 10:32:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:13.379 10:32:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:13.379 10:32:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1501736 00:05:13.379 10:32:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1501736 00:05:13.379 10:32:20 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1501736 ']' 00:05:13.379 10:32:20 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.379 10:32:20 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.379 10:32:20 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.379 10:32:20 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.379 10:32:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:13.638 [2024-11-19 10:32:20.843253] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:05:13.638 [2024-11-19 10:32:20.843301] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1501736 ] 00:05:13.638 [2024-11-19 10:32:20.919571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.638 [2024-11-19 10:32:20.962258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.898 10:32:21 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.898 10:32:21 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:13.898 10:32:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:13.898 10:32:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:13.898 10:32:21 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.898 10:32:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:13.898 { 00:05:13.898 "filename": "/tmp/spdk_mem_dump.txt" 00:05:13.898 } 00:05:13.898 10:32:21 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.898 10:32:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:13.898 DPDK memory size 810.000000 MiB in 1 heap(s) 00:05:13.898 1 heaps totaling size 810.000000 MiB 00:05:13.898 size: 810.000000 MiB heap id: 0 00:05:13.898 end heaps---------- 00:05:13.898 9 mempools totaling size 595.772034 MiB 00:05:13.898 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:13.898 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:13.898 size: 92.545471 MiB name: bdev_io_1501736 00:05:13.898 size: 50.003479 MiB name: msgpool_1501736 00:05:13.898 size: 36.509338 MiB name: fsdev_io_1501736 00:05:13.898 size: 21.763794 MiB name: PDU_Pool 00:05:13.898 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:13.898 size: 4.133484 MiB name: evtpool_1501736 00:05:13.898 size: 0.026123 MiB name: Session_Pool 00:05:13.898 end mempools------- 00:05:13.898 6 memzones totaling size 4.142822 MiB 00:05:13.898 size: 1.000366 MiB name: RG_ring_0_1501736 00:05:13.898 size: 1.000366 MiB name: RG_ring_1_1501736 00:05:13.898 size: 1.000366 MiB name: RG_ring_4_1501736 00:05:13.898 size: 1.000366 MiB name: RG_ring_5_1501736 00:05:13.898 size: 0.125366 MiB name: RG_ring_2_1501736 00:05:13.898 size: 0.015991 MiB name: RG_ring_3_1501736 00:05:13.898 end memzones------- 00:05:13.898 10:32:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:13.898 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:13.898 list of free elements. 
size: 10.862488 MiB 00:05:13.899 element at address: 0x200018a00000 with size: 0.999878 MiB 00:05:13.899 element at address: 0x200018c00000 with size: 0.999878 MiB 00:05:13.899 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:13.899 element at address: 0x200031800000 with size: 0.994446 MiB 00:05:13.899 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:13.899 element at address: 0x200012c00000 with size: 0.954285 MiB 00:05:13.899 element at address: 0x200018e00000 with size: 0.936584 MiB 00:05:13.899 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:13.899 element at address: 0x20001a600000 with size: 0.582886 MiB 00:05:13.899 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:13.899 element at address: 0x20000a600000 with size: 0.490723 MiB 00:05:13.899 element at address: 0x200019000000 with size: 0.485657 MiB 00:05:13.899 element at address: 0x200003e00000 with size: 0.481934 MiB 00:05:13.899 element at address: 0x200027a00000 with size: 0.410034 MiB 00:05:13.899 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:13.899 list of standard malloc elements. size: 199.218628 MiB 00:05:13.899 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:13.899 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:13.899 element at address: 0x200018afff80 with size: 1.000122 MiB 00:05:13.899 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:05:13.899 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:13.899 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:13.899 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:05:13.899 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:13.899 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:05:13.899 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:13.899 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:13.899 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:13.899 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:13.899 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:13.899 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:13.899 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:13.899 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:13.899 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:13.899 element at address: 0x20000085f300 with size: 0.000183 MiB 00:05:13.899 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:13.899 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:13.899 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:13.899 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:13.899 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:13.899 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:13.899 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:13.899 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:13.899 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:13.899 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:13.899 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:13.899 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:13.899 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:13.899 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:05:13.899 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:05:13.899 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:05:13.899 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:05:13.899 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:05:13.899 element at address: 0x20001a695380 with size: 0.000183 MiB 00:05:13.899 element at address: 0x20001a695440 with size: 0.000183 MiB 00:05:13.899 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:05:13.899 element at address: 0x200027a69040 with size: 0.000183 MiB 00:05:13.899 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:05:13.899 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:05:13.899 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:05:13.899 list of memzone associated elements. size: 599.918884 MiB 00:05:13.899 element at address: 0x20001a695500 with size: 211.416748 MiB 00:05:13.899 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:13.899 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:05:13.899 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:13.899 element at address: 0x200012df4780 with size: 92.045044 MiB 00:05:13.899 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_1501736_0 00:05:13.899 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:13.899 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1501736_0 00:05:13.899 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:13.899 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1501736_0 00:05:13.899 element at address: 0x2000191be940 with size: 20.255554 MiB 00:05:13.899 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:13.899 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:05:13.899 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:13.899 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:13.899 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1501736_0 00:05:13.899 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:13.899 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1501736 00:05:13.899 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:13.899 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1501736 00:05:13.899 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:13.899 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:13.899 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:05:13.899 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:13.899 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:13.899 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:13.899 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:13.899 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:13.899 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:13.899 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1501736 00:05:13.899 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:13.899 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1501736 00:05:13.899 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:05:13.899 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1501736 00:05:13.899 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:05:13.899 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1501736 00:05:13.899 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:13.899 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1501736 00:05:13.899 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:13.899 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1501736 00:05:13.899 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:13.899 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:13.899 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:13.899 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:13.899 element at address: 0x20001907c540 with size: 0.250488 MiB 00:05:13.899 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:13.899 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:13.899 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1501736 00:05:13.899 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:05:13.899 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1501736 00:05:13.899 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:13.899 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:13.899 element at address: 0x200027a69100 with size: 0.023743 MiB 00:05:13.899 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:13.899 element at address: 0x20000085b100 with size: 0.016113 MiB 00:05:13.899 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1501736 00:05:13.899 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:05:13.899 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:13.899 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:13.899 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1501736 00:05:13.899 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:13.899 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1501736 00:05:13.899 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:13.899 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1501736 00:05:13.899 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:05:13.899 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:13.899 10:32:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:13.899 10:32:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1501736 00:05:13.899 10:32:21 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1501736 ']' 00:05:13.899 10:32:21 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1501736 00:05:13.899 10:32:21 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:13.899 10:32:21 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:13.899 10:32:21 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1501736 00:05:14.158 10:32:21 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:14.158 10:32:21 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:14.158 10:32:21 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1501736' 00:05:14.158 killing process with pid 1501736 00:05:14.158 10:32:21 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1501736 00:05:14.158 10:32:21 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1501736 00:05:14.418 00:05:14.418 real 0m1.020s 00:05:14.418 user 0m0.954s 00:05:14.418 sys 0m0.403s 00:05:14.418 10:32:21 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.418 10:32:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:14.418 ************************************ 00:05:14.418 END TEST dpdk_mem_utility 00:05:14.418 ************************************ 00:05:14.418 10:32:21 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:14.418 10:32:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.418 10:32:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.418 10:32:21 -- common/autotest_common.sh@10 -- # set +x 00:05:14.418 ************************************ 00:05:14.418 START TEST event 00:05:14.418 ************************************ 00:05:14.418 10:32:21 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:14.418 * Looking for test storage... 00:05:14.418 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:14.418 10:32:21 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:14.418 10:32:21 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:14.418 10:32:21 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:14.678 10:32:21 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:14.678 10:32:21 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.678 10:32:21 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.678 10:32:21 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.678 10:32:21 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.678 10:32:21 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.678 10:32:21 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.678 10:32:21 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.678 10:32:21 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.678 10:32:21 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.678 10:32:21 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.678 10:32:21 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.678 10:32:21 event -- scripts/common.sh@344 -- # case "$op" in 00:05:14.678 10:32:21 event -- scripts/common.sh@345 -- # : 1 00:05:14.678 10:32:21 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.678 10:32:21 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:14.678 10:32:21 event -- scripts/common.sh@365 -- # decimal 1 00:05:14.678 10:32:21 event -- scripts/common.sh@353 -- # local d=1 00:05:14.678 10:32:21 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.678 10:32:21 event -- scripts/common.sh@355 -- # echo 1 00:05:14.678 10:32:21 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.678 10:32:21 event -- scripts/common.sh@366 -- # decimal 2 00:05:14.678 10:32:21 event -- scripts/common.sh@353 -- # local d=2 00:05:14.678 10:32:21 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.678 10:32:21 event -- scripts/common.sh@355 -- # echo 2 00:05:14.678 10:32:21 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.678 10:32:21 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.678 10:32:21 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.678 10:32:21 event -- scripts/common.sh@368 -- # return 0 00:05:14.678 10:32:21 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.678 10:32:21 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:14.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.678 --rc genhtml_branch_coverage=1 00:05:14.678 --rc genhtml_function_coverage=1 00:05:14.678 --rc genhtml_legend=1 00:05:14.678 --rc geninfo_all_blocks=1 00:05:14.678 --rc geninfo_unexecuted_blocks=1 00:05:14.678 00:05:14.678 ' 00:05:14.678 10:32:21 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:14.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.678 --rc genhtml_branch_coverage=1 00:05:14.678 --rc genhtml_function_coverage=1 00:05:14.678 --rc genhtml_legend=1 00:05:14.678 --rc geninfo_all_blocks=1 00:05:14.678 --rc geninfo_unexecuted_blocks=1 00:05:14.678 00:05:14.678 ' 00:05:14.678 10:32:21 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:14.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.678 --rc genhtml_branch_coverage=1 00:05:14.679 --rc genhtml_function_coverage=1 00:05:14.679 --rc genhtml_legend=1 00:05:14.679 --rc geninfo_all_blocks=1 00:05:14.679 --rc geninfo_unexecuted_blocks=1 00:05:14.679 00:05:14.679 ' 00:05:14.679 10:32:21 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:14.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.679 --rc genhtml_branch_coverage=1 00:05:14.679 --rc genhtml_function_coverage=1 00:05:14.679 --rc genhtml_legend=1 00:05:14.679 --rc geninfo_all_blocks=1 00:05:14.679 --rc geninfo_unexecuted_blocks=1 00:05:14.679 00:05:14.679 ' 00:05:14.679 10:32:21 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:14.679 10:32:21 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:14.679 10:32:21 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:14.679 10:32:21 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:14.679 10:32:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.679 10:32:21 event -- common/autotest_common.sh@10 -- # set +x 00:05:14.679 ************************************ 00:05:14.679 START TEST event_perf 00:05:14.679 ************************************ 00:05:14.679 10:32:21 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:05:14.679 Running I/O for 1 seconds...[2024-11-19 10:32:21.952461] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:05:14.679 [2024-11-19 10:32:21.952529] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1501991 ] 00:05:14.679 [2024-11-19 10:32:22.033012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:14.679 [2024-11-19 10:32:22.077123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.679 [2024-11-19 10:32:22.077235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:14.679 [2024-11-19 10:32:22.077341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.679 [2024-11-19 10:32:22.077342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:16.058 Running I/O for 1 seconds... 00:05:16.058 lcore 0: 206914 00:05:16.058 lcore 1: 206912 00:05:16.058 lcore 2: 206913 00:05:16.058 lcore 3: 206913 00:05:16.058 done. 00:05:16.058 00:05:16.058 real 0m1.187s 00:05:16.058 user 0m4.096s 00:05:16.058 sys 0m0.087s 00:05:16.058 10:32:23 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.058 10:32:23 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:16.058 ************************************ 00:05:16.058 END TEST event_perf 00:05:16.058 ************************************ 00:05:16.058 10:32:23 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:16.058 10:32:23 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:16.058 10:32:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.058 10:32:23 event -- common/autotest_common.sh@10 -- # set +x 00:05:16.058 ************************************ 00:05:16.058 START TEST event_reactor 00:05:16.058 ************************************ 00:05:16.058 10:32:23 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:16.058 [2024-11-19 10:32:23.211762] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:05:16.058 [2024-11-19 10:32:23.211837] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1502242 ] 00:05:16.058 [2024-11-19 10:32:23.288218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.058 [2024-11-19 10:32:23.329088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.996 test_start 00:05:16.996 oneshot 00:05:16.996 tick 100 00:05:16.996 tick 100 00:05:16.996 tick 250 00:05:16.996 tick 100 00:05:16.996 tick 100 00:05:16.996 tick 100 00:05:16.996 tick 250 00:05:16.996 tick 500 00:05:16.996 tick 100 00:05:16.996 tick 100 00:05:16.996 tick 250 00:05:16.996 tick 100 00:05:16.996 tick 100 00:05:16.996 test_end 00:05:16.996 00:05:16.996 real 0m1.174s 00:05:16.996 user 0m1.092s 00:05:16.996 sys 0m0.079s 00:05:16.996 10:32:24 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.996 10:32:24 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:16.996 ************************************ 00:05:16.996 END TEST event_reactor 00:05:16.996 ************************************ 00:05:16.996 10:32:24 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:16.996 10:32:24 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:16.996 10:32:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.996 10:32:24 event -- common/autotest_common.sh@10 -- # set +x 00:05:16.996 ************************************ 00:05:16.996 START TEST event_reactor_perf 00:05:16.996 ************************************ 00:05:16.996 10:32:24 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:17.256 [2024-11-19 10:32:24.459365] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:05:17.256 [2024-11-19 10:32:24.459434] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1502490 ] 00:05:17.256 [2024-11-19 10:32:24.540066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.256 [2024-11-19 10:32:24.582196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.192 test_start 00:05:18.192 test_end 00:05:18.192 Performance: 496473 events per second 00:05:18.192 00:05:18.192 real 0m1.185s 00:05:18.192 user 0m1.101s 00:05:18.192 sys 0m0.079s 00:05:18.192 10:32:25 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.192 10:32:25 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:18.192 ************************************ 00:05:18.192 END TEST event_reactor_perf 00:05:18.192 ************************************ 00:05:18.451 10:32:25 event -- event/event.sh@49 -- # uname -s 00:05:18.451 10:32:25 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:18.451 10:32:25 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:18.451 10:32:25 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.451 10:32:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.451 10:32:25 event -- common/autotest_common.sh@10 -- # set +x 00:05:18.451 ************************************ 00:05:18.451 START TEST event_scheduler 00:05:18.452 ************************************ 00:05:18.452 10:32:25 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:18.452 * Looking for test storage... 
00:05:18.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:18.452 10:32:25 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:18.452 10:32:25 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:18.452 10:32:25 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:18.452 10:32:25 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:18.452 10:32:25 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.452 10:32:25 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.452 10:32:25 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.452 10:32:25 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.452 10:32:25 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.452 10:32:25 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.452 10:32:25 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.452 10:32:25 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.452 10:32:25 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.452 10:32:25 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.452 10:32:25 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.452 10:32:25 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:18.452 10:32:25 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:18.452 10:32:25 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.452 10:32:25 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:18.452 10:32:25 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:18.452 10:32:25 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:18.452 10:32:25 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.452 10:32:25 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:18.452 10:32:25 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.452 10:32:25 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:18.452 10:32:25 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:18.452 10:32:25 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.452 10:32:25 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:18.452 10:32:25 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.452 10:32:25 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.452 10:32:25 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.452 10:32:25 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:18.452 10:32:25 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.452 10:32:25 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:18.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.452 --rc genhtml_branch_coverage=1 00:05:18.452 --rc genhtml_function_coverage=1 00:05:18.452 --rc genhtml_legend=1 00:05:18.452 --rc geninfo_all_blocks=1 00:05:18.452 --rc geninfo_unexecuted_blocks=1 00:05:18.452 00:05:18.452 ' 00:05:18.452 10:32:25 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:18.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.452 --rc genhtml_branch_coverage=1 00:05:18.452 --rc genhtml_function_coverage=1 00:05:18.452 --rc genhtml_legend=1 00:05:18.452 --rc geninfo_all_blocks=1 00:05:18.452 --rc geninfo_unexecuted_blocks=1 00:05:18.452 00:05:18.452 ' 00:05:18.452 10:32:25 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:18.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.452 --rc genhtml_branch_coverage=1 00:05:18.452 --rc genhtml_function_coverage=1 00:05:18.452 --rc genhtml_legend=1 00:05:18.452 --rc geninfo_all_blocks=1 00:05:18.452 --rc geninfo_unexecuted_blocks=1 00:05:18.452 00:05:18.452 ' 00:05:18.452 10:32:25 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:18.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.452 --rc genhtml_branch_coverage=1 00:05:18.452 --rc genhtml_function_coverage=1 00:05:18.452 --rc genhtml_legend=1 00:05:18.452 --rc geninfo_all_blocks=1 00:05:18.452 --rc geninfo_unexecuted_blocks=1 00:05:18.452 00:05:18.452 ' 00:05:18.452 10:32:25 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:18.452 10:32:25 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1502776 00:05:18.452 10:32:25 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:18.452 10:32:25 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:18.452 10:32:25 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
1502776 00:05:18.452 10:32:25 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1502776 ']' 00:05:18.452 10:32:25 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.452 10:32:25 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.452 10:32:25 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.452 10:32:25 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.452 10:32:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:18.711 [2024-11-19 10:32:25.924719] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:05:18.711 [2024-11-19 10:32:25.924765] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1502776 ] 00:05:18.712 [2024-11-19 10:32:25.997642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:18.712 [2024-11-19 10:32:26.040380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.712 [2024-11-19 10:32:26.040489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.712 [2024-11-19 10:32:26.040596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:18.712 [2024-11-19 10:32:26.040597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:18.712 10:32:26 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.712 10:32:26 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:18.712 10:32:26 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:18.712 10:32:26 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.712 10:32:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:18.712 [2024-11-19 10:32:26.081186] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:18.712 [2024-11-19 10:32:26.081203] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:18.712 [2024-11-19 10:32:26.081212] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:18.712 [2024-11-19 10:32:26.081218] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:18.712 [2024-11-19 10:32:26.081223] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:18.712 10:32:26 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.712 10:32:26 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:18.712 10:32:26 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.712 10:32:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:18.712 [2024-11-19 10:32:26.155095] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
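
For reference, the scheduler bring-up traced above is driven entirely over the target's RPC socket: the test binary was started with --wait-for-rpc, so the framework stays parked until a scheduler is chosen, and only then is initialization allowed to finish. A minimal sketch of the same two-step sequence, assuming a target already waiting on the default /var/tmp/spdk.sock; both method names appear in the rpc_get_methods listing earlier in this log:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Pick the dynamic scheduler while the framework is still held at --wait-for-rpc,
  # then let initialization proceed; mirrors the two rpc_cmd calls traced above.
  $SPDK/scripts/rpc.py framework_set_scheduler dynamic
  $SPDK/scripts/rpc.py framework_start_init
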
00:05:18.712 10:32:26 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.712 10:32:26 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:18.712 10:32:26 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.712 10:32:26 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.712 10:32:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:18.971 ************************************ 00:05:18.971 START TEST scheduler_create_thread 00:05:18.971 ************************************ 00:05:18.971 10:32:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:18.971 10:32:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:18.971 10:32:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.971 10:32:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.971 2 00:05:18.971 10:32:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.971 10:32:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:18.971 10:32:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.971 10:32:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.971 3 00:05:18.971 10:32:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.971 10:32:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:18.971 10:32:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.971 10:32:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.971 4 00:05:18.971 10:32:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.971 10:32:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:18.971 10:32:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.971 10:32:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.971 5 00:05:18.971 10:32:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.971 10:32:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:18.971 10:32:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.971 10:32:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.971 6 00:05:18.971 10:32:26 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.971 10:32:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:18.971 10:32:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.971 10:32:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.971 7 00:05:18.971 10:32:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.971 10:32:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:18.971 10:32:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.971 10:32:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.971 8 00:05:18.971 10:32:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.971 10:32:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:18.971 10:32:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.971 10:32:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.971 9 00:05:18.971 10:32:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.971 10:32:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:18.971 10:32:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.971 10:32:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.971 10 00:05:18.971 10:32:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.971 10:32:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:18.971 10:32:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.971 10:32:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.971 10:32:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.971 10:32:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:18.971 10:32:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:18.971 10:32:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.971 10:32:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.539 10:32:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.539 10:32:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:19.539 10:32:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.539 10:32:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.917 10:32:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.917 10:32:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:20.917 10:32:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:20.917 10:32:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.917 10:32:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.858 10:32:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.858 00:05:21.858 real 0m3.101s 00:05:21.858 user 0m0.026s 00:05:21.858 sys 0m0.003s 00:05:21.858 10:32:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.858 10:32:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.858 ************************************ 00:05:21.858 END TEST scheduler_create_thread 00:05:21.858 ************************************ 00:05:22.120 10:32:29 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:22.120 10:32:29 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1502776 00:05:22.120 10:32:29 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1502776 ']' 00:05:22.120 10:32:29 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 1502776 00:05:22.120 10:32:29 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:22.120 10:32:29 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.120 10:32:29 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1502776 00:05:22.120 10:32:29 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:22.120 10:32:29 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:22.120 10:32:29 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1502776' 00:05:22.120 killing process with pid 1502776 00:05:22.120 10:32:29 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1502776 00:05:22.120 10:32:29 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1502776 00:05:22.379 [2024-11-19 10:32:29.674330] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
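
Condensed from the xtrace above, the scheduler test exercises a create/activate/delete thread lifecycle through plugin RPCs; the scheduler_thread_* methods are not in the core rpc_get_methods list but come from the test's own plugin, hence --plugin scheduler_plugin on every call. A sketch of one round-trip, assuming the plugin module is importable (the harness runs rpc.py with the scheduler test directory on PYTHONPATH) and capturing the printed thread id the same way the trace captures thread_id=11:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc="$SPDK/scripts/rpc.py --plugin scheduler_plugin"
  # Create a thread pinned to core 0 that reports itself 100% busy;
  # the RPC prints the new thread id on stdout.
  tid=$($rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100)
  $rpc scheduler_thread_set_active "$tid" 50   # drop the thread to 50% active
  $rpc scheduler_thread_delete "$tid"          # tear it down again
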
00:05:22.638 00:05:22.638 real 0m4.161s 00:05:22.638 user 0m6.632s 00:05:22.638 sys 0m0.371s 00:05:22.638 10:32:29 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.638 10:32:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:22.638 ************************************ 00:05:22.638 END TEST event_scheduler 00:05:22.638 ************************************ 00:05:22.638 10:32:29 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:22.638 10:32:29 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:22.638 10:32:29 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.638 10:32:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.638 10:32:29 event -- common/autotest_common.sh@10 -- # set +x 00:05:22.638 ************************************ 00:05:22.638 START TEST app_repeat 00:05:22.638 ************************************ 00:05:22.638 10:32:29 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:22.638 10:32:29 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.638 10:32:29 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.638 10:32:29 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:22.638 10:32:29 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.638 10:32:29 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:22.639 10:32:29 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:22.639 10:32:29 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:22.639 10:32:29 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1503519 00:05:22.639 10:32:29 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:22.639 10:32:29 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:22.639 10:32:29 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1503519' 00:05:22.639 Process app_repeat pid: 1503519 00:05:22.639 10:32:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:22.639 10:32:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:22.639 spdk_app_start Round 0 00:05:22.639 10:32:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1503519 /var/tmp/spdk-nbd.sock 00:05:22.639 10:32:29 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1503519 ']' 00:05:22.639 10:32:29 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:22.639 10:32:29 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.639 10:32:29 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:22.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:22.639 10:32:29 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.639 10:32:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:22.639 [2024-11-19 10:32:29.975164] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:05:22.639 [2024-11-19 10:32:29.975219] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1503519 ] 00:05:22.639 [2024-11-19 10:32:30.054350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:22.898 [2024-11-19 10:32:30.101886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.898 [2024-11-19 10:32:30.101888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.898 10:32:30 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.898 10:32:30 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:22.898 10:32:30 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.157 Malloc0 00:05:23.157 10:32:30 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.416 Malloc1 00:05:23.416 10:32:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.416 10:32:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.416 10:32:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.416 10:32:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:23.416 10:32:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.416 10:32:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:23.416 10:32:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.416 10:32:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.416 10:32:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.416 10:32:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:23.416 10:32:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.416 10:32:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:23.416 10:32:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:23.416 10:32:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:23.416 10:32:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.416 10:32:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:23.417 /dev/nbd0 00:05:23.676 10:32:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:23.676 10:32:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:23.676 10:32:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:23.676 10:32:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:23.676 10:32:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:23.676 10:32:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:23.676 10:32:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:05:23.676 10:32:30 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:23.676 10:32:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:23.676 10:32:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:23.676 10:32:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.676 1+0 records in 00:05:23.676 1+0 records out 00:05:23.676 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208423 s, 19.7 MB/s 00:05:23.676 10:32:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:23.676 10:32:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:23.676 10:32:30 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:23.676 10:32:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:23.676 10:32:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:23.676 10:32:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.676 10:32:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.676 10:32:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:23.676 /dev/nbd1 00:05:23.676 10:32:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:23.676 10:32:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:23.676 10:32:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:23.676 10:32:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:23.676 10:32:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:23.676 10:32:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:23.676 10:32:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:23.676 10:32:31 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:23.676 10:32:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:23.676 10:32:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:23.676 10:32:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.935 1+0 records in 00:05:23.935 1+0 records out 00:05:23.935 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023086 s, 17.7 MB/s 00:05:23.935 10:32:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:23.935 10:32:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:23.935 10:32:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:23.935 10:32:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:23.935 10:32:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:23.935 10:32:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.935 10:32:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.935 10:32:31 
event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.935 10:32:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.935 10:32:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:23.935 10:32:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:23.935 { 00:05:23.935 "nbd_device": "/dev/nbd0", 00:05:23.935 "bdev_name": "Malloc0" 00:05:23.935 }, 00:05:23.935 { 00:05:23.935 "nbd_device": "/dev/nbd1", 00:05:23.935 "bdev_name": "Malloc1" 00:05:23.935 } 00:05:23.935 ]' 00:05:23.935 10:32:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:23.935 { 00:05:23.935 "nbd_device": "/dev/nbd0", 00:05:23.935 "bdev_name": "Malloc0" 00:05:23.935 }, 00:05:23.935 { 00:05:23.935 "nbd_device": "/dev/nbd1", 00:05:23.935 "bdev_name": "Malloc1" 00:05:23.935 } 00:05:23.935 ]' 00:05:23.935 10:32:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:23.935 10:32:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:23.935 /dev/nbd1' 00:05:23.935 10:32:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:23.935 /dev/nbd1' 00:05:23.935 10:32:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:23.935 10:32:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:23.935 10:32:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:23.935 10:32:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:23.935 10:32:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:23.935 10:32:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:23.935 10:32:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.935 10:32:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:23.935 10:32:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:23.935 10:32:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:23.935 10:32:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:23.935 10:32:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:24.195 256+0 records in 00:05:24.195 256+0 records out 00:05:24.195 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106371 s, 98.6 MB/s 00:05:24.195 10:32:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.195 10:32:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:24.195 256+0 records in 00:05:24.195 256+0 records out 00:05:24.195 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138391 s, 75.8 MB/s 00:05:24.195 10:32:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.195 10:32:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:24.195 256+0 records in 00:05:24.195 256+0 records out 00:05:24.195 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0153956 s, 68.1 MB/s 00:05:24.195 10:32:31 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:24.195 10:32:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.195 10:32:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.195 10:32:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:24.195 10:32:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:24.195 10:32:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:24.195 10:32:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:24.195 10:32:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:24.195 10:32:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:24.195 10:32:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:24.195 10:32:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:24.195 10:32:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:24.195 10:32:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:24.195 10:32:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.195 10:32:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.195 10:32:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:24.195 10:32:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:24.195 10:32:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.195 10:32:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:24.454 10:32:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:24.454 10:32:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:24.454 10:32:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:24.454 10:32:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.454 10:32:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.454 10:32:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:24.454 10:32:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:24.454 10:32:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.454 10:32:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.454 10:32:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:24.454 10:32:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:24.454 10:32:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:24.454 10:32:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:24.454 10:32:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.454 10:32:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:05:24.454 10:32:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:24.454 10:32:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:24.454 10:32:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.454 10:32:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:24.454 10:32:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.454 10:32:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.713 10:32:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:24.713 10:32:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:24.713 10:32:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:24.713 10:32:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:24.713 10:32:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:24.713 10:32:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.713 10:32:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:24.713 10:32:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:24.713 10:32:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:24.713 10:32:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:24.713 10:32:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:24.713 10:32:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:24.713 10:32:32 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:24.972 10:32:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:25.231 [2024-11-19 10:32:32.506086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:25.231 [2024-11-19 10:32:32.543383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.231 [2024-11-19 10:32:32.543385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.231 [2024-11-19 10:32:32.584713] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:25.231 [2024-11-19 10:32:32.584754] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:28.520 10:32:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:28.520 10:32:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:28.520 spdk_app_start Round 1 00:05:28.520 10:32:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1503519 /var/tmp/spdk-nbd.sock 00:05:28.520 10:32:35 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1503519 ']' 00:05:28.520 10:32:35 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:28.520 10:32:35 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.520 10:32:35 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:28.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:28.520 10:32:35 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.520 10:32:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:28.520 10:32:35 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.520 10:32:35 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:28.520 10:32:35 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:28.520 Malloc0 00:05:28.520 10:32:35 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:28.520 Malloc1 00:05:28.779 10:32:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:28.779 10:32:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.779 10:32:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.779 10:32:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:28.779 10:32:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.779 10:32:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:28.779 10:32:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:28.779 10:32:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.779 10:32:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.779 10:32:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:28.779 10:32:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.779 10:32:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:28.779 10:32:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:28.779 10:32:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:28.779 10:32:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.779 10:32:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:28.779 /dev/nbd0 00:05:29.039 10:32:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:29.039 10:32:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:29.039 10:32:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:29.039 10:32:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:29.039 10:32:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:29.039 10:32:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:29.039 10:32:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:29.039 10:32:36 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:29.039 10:32:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:29.039 10:32:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:29.039 10:32:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:29.039 1+0 records in 00:05:29.039 1+0 records out 00:05:29.039 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00016015 s, 25.6 MB/s 00:05:29.039 10:32:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:29.039 10:32:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:29.039 10:32:36 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:29.039 10:32:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:29.039 10:32:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:29.039 10:32:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:29.039 10:32:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.039 10:32:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:29.039 /dev/nbd1 00:05:29.039 10:32:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:29.039 10:32:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:29.039 10:32:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:29.039 10:32:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:29.039 10:32:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:29.039 10:32:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:29.039 10:32:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:29.039 10:32:36 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:29.039 10:32:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:29.039 10:32:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:29.039 10:32:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:29.039 1+0 records in 00:05:29.039 1+0 records out 00:05:29.039 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198168 s, 20.7 MB/s 00:05:29.039 10:32:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:29.039 10:32:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:29.039 10:32:36 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:29.039 10:32:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:29.039 10:32:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:29.039 10:32:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:29.039 10:32:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.039 10:32:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:29.039 10:32:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.299 10:32:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:29.299 10:32:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:29.299 { 00:05:29.299 "nbd_device": "/dev/nbd0", 00:05:29.299 "bdev_name": "Malloc0" 00:05:29.299 }, 00:05:29.299 { 00:05:29.299 "nbd_device": "/dev/nbd1", 00:05:29.299 "bdev_name": "Malloc1" 00:05:29.299 } 00:05:29.299 ]' 00:05:29.299 10:32:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:29.299 { 00:05:29.299 "nbd_device": "/dev/nbd0", 00:05:29.299 "bdev_name": "Malloc0" 00:05:29.299 }, 00:05:29.299 { 00:05:29.299 "nbd_device": "/dev/nbd1", 00:05:29.299 "bdev_name": "Malloc1" 00:05:29.299 } 00:05:29.299 ]' 00:05:29.299 10:32:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:29.299 10:32:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:29.299 /dev/nbd1' 00:05:29.299 10:32:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:29.299 /dev/nbd1' 00:05:29.299 10:32:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:29.299 10:32:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:29.299 10:32:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:29.299 10:32:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:29.299 10:32:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:29.299 10:32:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:29.299 10:32:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.299 10:32:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:29.299 10:32:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:29.299 10:32:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:29.299 10:32:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:29.299 10:32:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:29.559 256+0 records in 00:05:29.559 256+0 records out 00:05:29.559 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010664 s, 98.3 MB/s 00:05:29.559 10:32:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:29.559 10:32:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:29.559 256+0 records in 00:05:29.559 256+0 records out 00:05:29.559 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137297 s, 76.4 MB/s 00:05:29.559 10:32:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:29.559 10:32:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:29.559 256+0 records in 00:05:29.559 256+0 records out 00:05:29.559 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149929 s, 69.9 MB/s 00:05:29.559 10:32:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:29.559 10:32:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.559 10:32:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:29.559 10:32:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:29.559 10:32:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:29.559 10:32:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:29.559 10:32:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:29.559 10:32:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:29.559 10:32:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:29.559 10:32:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:29.559 10:32:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:29.559 10:32:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:29.559 10:32:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:29.559 10:32:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.559 10:32:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.559 10:32:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:29.559 10:32:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:29.559 10:32:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:29.559 10:32:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:29.818 10:32:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:29.818 10:32:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:29.818 10:32:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:29.818 10:32:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.818 10:32:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.818 10:32:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:29.818 10:32:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:29.818 10:32:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.818 10:32:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:29.818 10:32:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:29.818 10:32:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:29.818 10:32:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:29.818 10:32:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:29.818 10:32:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.818 10:32:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.818 10:32:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:29.818 10:32:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:29.818 10:32:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.818 10:32:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:29.818 10:32:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.818 10:32:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:30.077 10:32:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:30.077 10:32:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:30.077 10:32:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:30.077 10:32:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:30.077 10:32:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:30.077 10:32:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:30.077 10:32:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:30.077 10:32:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:30.077 10:32:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:30.077 10:32:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:30.077 10:32:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:30.077 10:32:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:30.077 10:32:37 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:30.336 10:32:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:30.595 [2024-11-19 10:32:37.860254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:30.595 [2024-11-19 10:32:37.897489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.595 [2024-11-19 10:32:37.897491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.595 [2024-11-19 10:32:37.939180] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:30.595 [2024-11-19 10:32:37.939222] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:33.885 10:32:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:33.885 10:32:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:33.885 spdk_app_start Round 2 00:05:33.885 10:32:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1503519 /var/tmp/spdk-nbd.sock 00:05:33.885 10:32:40 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1503519 ']' 00:05:33.885 10:32:40 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:33.885 10:32:40 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.885 10:32:40 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:33.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
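Every nbd_start_disk above is followed by the same readiness probe: poll /proc/partitions until the nbd name appears, then read one 4 KiB block back with O_DIRECT to prove the device actually serves I/O, treating a non-empty capture file as success. A condensed reconstruction of that helper, modelled on the waitfornbd steps visible in the trace (the 20-try limit, dd invocation and stat/rm epilogue match the log; the scratch path and retry pacing are assumptions):

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed pacing between retries
        done
        # a single direct-I/O read confirms the kernel device answers requests
        dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]   # 4096 bytes read back means the device is live
    }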
00:05:33.885 10:32:40 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.885 10:32:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:33.885 10:32:40 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.885 10:32:40 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:33.885 10:32:40 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:33.885 Malloc0 00:05:33.885 10:32:41 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:33.885 Malloc1 00:05:34.145 10:32:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:34.145 10:32:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.145 10:32:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:34.145 10:32:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:34.145 10:32:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.145 10:32:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:34.145 10:32:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:34.145 10:32:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.145 10:32:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:34.145 10:32:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:34.145 10:32:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.145 10:32:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:34.145 10:32:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:34.145 10:32:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:34.145 10:32:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:34.145 10:32:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:34.145 /dev/nbd0 00:05:34.145 10:32:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:34.145 10:32:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:34.145 10:32:41 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:34.145 10:32:41 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:34.145 10:32:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:34.145 10:32:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:34.145 10:32:41 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:34.145 10:32:41 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:34.145 10:32:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:34.145 10:32:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:34.145 10:32:41 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:34.145 1+0 records in 00:05:34.145 1+0 records out 00:05:34.145 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00742992 s, 551 kB/s 00:05:34.145 10:32:41 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:34.404 10:32:41 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:34.404 10:32:41 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:34.404 10:32:41 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:34.404 10:32:41 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:34.404 10:32:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:34.404 10:32:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:34.404 10:32:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:34.404 /dev/nbd1 00:05:34.404 10:32:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:34.404 10:32:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:34.404 10:32:41 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:34.404 10:32:41 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:34.404 10:32:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:34.404 10:32:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:34.404 10:32:41 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:34.404 10:32:41 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:34.404 10:32:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:34.404 10:32:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:34.663 10:32:41 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:34.663 1+0 records in 00:05:34.663 1+0 records out 00:05:34.663 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00020239 s, 20.2 MB/s 00:05:34.663 10:32:41 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:34.663 10:32:41 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:34.663 10:32:41 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:34.663 10:32:41 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:34.663 10:32:41 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:34.663 10:32:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:34.663 10:32:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:34.663 10:32:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:34.663 10:32:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.663 10:32:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.663 10:32:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:34.663 { 00:05:34.663 "nbd_device": "/dev/nbd0", 00:05:34.663 "bdev_name": "Malloc0" 00:05:34.663 }, 00:05:34.663 { 00:05:34.663 "nbd_device": "/dev/nbd1", 00:05:34.663 "bdev_name": "Malloc1" 00:05:34.663 } 00:05:34.663 ]' 00:05:34.663 10:32:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:34.663 10:32:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:34.664 { 00:05:34.664 "nbd_device": "/dev/nbd0", 00:05:34.664 "bdev_name": "Malloc0" 00:05:34.664 }, 00:05:34.664 { 00:05:34.664 "nbd_device": "/dev/nbd1", 00:05:34.664 "bdev_name": "Malloc1" 00:05:34.664 } 00:05:34.664 ]' 00:05:34.664 10:32:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:34.664 /dev/nbd1' 00:05:34.664 10:32:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:34.664 /dev/nbd1' 00:05:34.664 10:32:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.664 10:32:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:34.664 10:32:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:34.664 10:32:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:34.664 10:32:42 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:34.664 10:32:42 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:34.664 10:32:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.664 10:32:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:34.664 10:32:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:34.664 10:32:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:34.664 10:32:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:34.664 10:32:42 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:34.969 256+0 records in 00:05:34.969 256+0 records out 00:05:34.969 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106626 s, 98.3 MB/s 00:05:34.969 10:32:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.969 10:32:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:34.969 256+0 records in 00:05:34.969 256+0 records out 00:05:34.969 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139366 s, 75.2 MB/s 00:05:34.969 10:32:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.969 10:32:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:34.969 256+0 records in 00:05:34.969 256+0 records out 00:05:34.969 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150555 s, 69.6 MB/s 00:05:34.969 10:32:42 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:34.969 10:32:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.969 10:32:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:34.969 10:32:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:34.969 10:32:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:34.969 10:32:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:34.969 10:32:42 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:34.969 10:32:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:34.969 10:32:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:34.970 10:32:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:34.970 10:32:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:34.970 10:32:42 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:34.970 10:32:42 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:34.970 10:32:42 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.970 10:32:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.970 10:32:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:34.970 10:32:42 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:34.970 10:32:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.970 10:32:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:35.279 10:32:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:35.279 10:32:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:35.279 10:32:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:35.279 10:32:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:35.279 10:32:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:35.279 10:32:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:35.279 10:32:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:35.279 10:32:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:35.279 10:32:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:35.279 10:32:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:35.279 10:32:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:35.279 10:32:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:35.279 10:32:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:35.279 10:32:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:35.279 10:32:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:35.279 10:32:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:35.279 10:32:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:35.279 10:32:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:35.279 10:32:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:35.279 10:32:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.279 10:32:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:35.537 10:32:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:35.537 10:32:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:35.537 10:32:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:35.537 10:32:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:35.537 10:32:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:35.537 10:32:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:35.537 10:32:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:35.537 10:32:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:35.537 10:32:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:35.537 10:32:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:35.537 10:32:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:35.537 10:32:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:35.537 10:32:42 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:35.796 10:32:43 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:35.796 [2024-11-19 10:32:43.231453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:36.055 [2024-11-19 10:32:43.270253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.055 [2024-11-19 10:32:43.270253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.055 [2024-11-19 10:32:43.311662] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:36.056 [2024-11-19 10:32:43.311700] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:39.345 10:32:46 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1503519 /var/tmp/spdk-nbd.sock 00:05:39.345 10:32:46 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1503519 ']' 00:05:39.345 10:32:46 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:39.345 10:32:46 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.345 10:32:46 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:39.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
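The data-integrity pass repeated in each round writes 1 MiB of random data through both nbd devices and compares it back: dd 256 x 4 KiB from /dev/urandom into a scratch file, dd that file onto /dev/nbd0 and /dev/nbd1 with oflag=direct, then cmp -b -n 1M between the file and each device. A hedged condensation of that write/verify loop, with the workspace-local scratch path shortened for readability:

    nbd_list=(/dev/nbd0 /dev/nbd1)
    tmp_file=/tmp/nbdrandtest
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256              # 1 MiB of random data
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct   # write phase
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"                              # verify phase, byte for byte
    done
    rm "$tmp_file"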
00:05:39.345 10:32:46 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.345 10:32:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:39.345 10:32:46 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.345 10:32:46 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:39.345 10:32:46 event.app_repeat -- event/event.sh@39 -- # killprocess 1503519 00:05:39.345 10:32:46 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1503519 ']' 00:05:39.345 10:32:46 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1503519 00:05:39.345 10:32:46 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:39.345 10:32:46 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.345 10:32:46 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1503519 00:05:39.345 10:32:46 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.345 10:32:46 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.345 10:32:46 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1503519' 00:05:39.345 killing process with pid 1503519 00:05:39.345 10:32:46 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1503519 00:05:39.345 10:32:46 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1503519 00:05:39.345 spdk_app_start is called in Round 0. 00:05:39.345 Shutdown signal received, stop current app iteration 00:05:39.345 Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 reinitialization... 00:05:39.345 spdk_app_start is called in Round 1. 00:05:39.345 Shutdown signal received, stop current app iteration 00:05:39.345 Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 reinitialization... 00:05:39.345 spdk_app_start is called in Round 2. 00:05:39.345 Shutdown signal received, stop current app iteration 00:05:39.345 Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 reinitialization... 00:05:39.345 spdk_app_start is called in Round 3. 
00:05:39.345 Shutdown signal received, stop current app iteration 00:05:39.345 10:32:46 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:39.345 10:32:46 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:39.345 00:05:39.345 real 0m16.545s 00:05:39.345 user 0m36.361s 00:05:39.346 sys 0m2.618s 00:05:39.346 10:32:46 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.346 10:32:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:39.346 ************************************ 00:05:39.346 END TEST app_repeat 00:05:39.346 ************************************ 00:05:39.346 10:32:46 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:39.346 10:32:46 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:39.346 10:32:46 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.346 10:32:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.346 10:32:46 event -- common/autotest_common.sh@10 -- # set +x 00:05:39.346 ************************************ 00:05:39.346 START TEST cpu_locks 00:05:39.346 ************************************ 00:05:39.346 10:32:46 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:39.346 * Looking for test storage... 00:05:39.346 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:39.346 10:32:46 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:39.346 10:32:46 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:39.346 10:32:46 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:39.346 10:32:46 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:39.346 10:32:46 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.346 10:32:46 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.346 10:32:46 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.346 10:32:46 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.346 10:32:46 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.346 10:32:46 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.346 10:32:46 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.346 10:32:46 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.346 10:32:46 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.346 10:32:46 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.346 10:32:46 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.346 10:32:46 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:39.346 10:32:46 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:39.346 10:32:46 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.346 10:32:46 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:39.346 10:32:46 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:39.346 10:32:46 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:39.346 10:32:46 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.346 10:32:46 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:39.346 10:32:46 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.346 10:32:46 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:39.346 10:32:46 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:39.346 10:32:46 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.346 10:32:46 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:39.346 10:32:46 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.346 10:32:46 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.346 10:32:46 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.346 10:32:46 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:39.346 10:32:46 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.346 10:32:46 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:39.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.346 --rc genhtml_branch_coverage=1 00:05:39.346 --rc genhtml_function_coverage=1 00:05:39.346 --rc genhtml_legend=1 00:05:39.346 --rc geninfo_all_blocks=1 00:05:39.346 --rc geninfo_unexecuted_blocks=1 00:05:39.346 00:05:39.346 ' 00:05:39.346 10:32:46 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:39.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.346 --rc genhtml_branch_coverage=1 00:05:39.346 --rc genhtml_function_coverage=1 00:05:39.346 --rc genhtml_legend=1 00:05:39.346 --rc geninfo_all_blocks=1 00:05:39.346 --rc geninfo_unexecuted_blocks=1 00:05:39.346 00:05:39.346 ' 00:05:39.346 10:32:46 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:39.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.346 --rc genhtml_branch_coverage=1 00:05:39.346 --rc genhtml_function_coverage=1 00:05:39.346 --rc genhtml_legend=1 00:05:39.346 --rc geninfo_all_blocks=1 00:05:39.346 --rc geninfo_unexecuted_blocks=1 00:05:39.346 00:05:39.346 ' 00:05:39.346 10:32:46 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:39.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.346 --rc genhtml_branch_coverage=1 00:05:39.346 --rc genhtml_function_coverage=1 00:05:39.346 --rc genhtml_legend=1 00:05:39.346 --rc geninfo_all_blocks=1 00:05:39.346 --rc geninfo_unexecuted_blocks=1 00:05:39.346 00:05:39.346 ' 00:05:39.346 10:32:46 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:39.346 10:32:46 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:39.346 10:32:46 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:39.346 10:32:46 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:39.346 10:32:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.346 10:32:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.346 10:32:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:39.346 ************************************ 
00:05:39.346 START TEST default_locks 00:05:39.346 ************************************ 00:05:39.346 10:32:46 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:39.346 10:32:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1506518 00:05:39.346 10:32:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1506518 00:05:39.346 10:32:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:39.346 10:32:46 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1506518 ']' 00:05:39.346 10:32:46 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.346 10:32:46 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.346 10:32:46 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.346 10:32:46 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.346 10:32:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:39.606 [2024-11-19 10:32:46.825807] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:05:39.606 [2024-11-19 10:32:46.825850] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1506518 ] 00:05:39.606 [2024-11-19 10:32:46.898742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.606 [2024-11-19 10:32:46.939757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.865 10:32:47 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.865 10:32:47 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:39.865 10:32:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1506518 00:05:39.865 10:32:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1506518 00:05:39.865 10:32:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:40.124 lslocks: write error 00:05:40.124 10:32:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1506518 00:05:40.124 10:32:47 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1506518 ']' 00:05:40.124 10:32:47 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1506518 00:05:40.124 10:32:47 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:40.124 10:32:47 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.124 10:32:47 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1506518 00:05:40.124 10:32:47 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:40.124 10:32:47 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:40.124 10:32:47 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 1506518' 00:05:40.124 killing process with pid 1506518 00:05:40.124 10:32:47 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1506518 00:05:40.124 10:32:47 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1506518 00:05:40.384 10:32:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1506518 00:05:40.384 10:32:47 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:40.384 10:32:47 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1506518 00:05:40.384 10:32:47 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:40.384 10:32:47 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.384 10:32:47 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:40.384 10:32:47 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.384 10:32:47 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1506518 00:05:40.384 10:32:47 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1506518 ']' 00:05:40.384 10:32:47 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.384 10:32:47 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.384 10:32:47 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:40.384 10:32:47 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.384 10:32:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.384 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1506518) - No such process 00:05:40.384 ERROR: process (pid: 1506518) is no longer running 00:05:40.384 10:32:47 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.384 10:32:47 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:40.384 10:32:47 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:40.384 10:32:47 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:40.384 10:32:47 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:40.384 10:32:47 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:40.384 10:32:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:40.384 10:32:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:40.384 10:32:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:40.384 10:32:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:40.384 00:05:40.384 real 0m0.912s 00:05:40.384 user 0m0.861s 00:05:40.384 sys 0m0.434s 00:05:40.384 10:32:47 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.384 10:32:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.384 ************************************ 00:05:40.384 END TEST default_locks 00:05:40.384 ************************************ 00:05:40.384 10:32:47 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:40.384 10:32:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.384 10:32:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.384 10:32:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.384 ************************************ 00:05:40.384 START TEST default_locks_via_rpc 00:05:40.384 ************************************ 00:05:40.384 10:32:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:40.384 10:32:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1506774 00:05:40.384 10:32:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1506774 00:05:40.384 10:32:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:40.384 10:32:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1506774 ']' 00:05:40.384 10:32:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.384 10:32:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.384 10:32:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
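The pass/fail logic just exercised by default_locks reduces to one shell pattern: ask the kernel which file locks the target holds and look for the per-core lock files. A minimal standalone sketch of that check, together with the liveness test killprocess uses (helper names match cpu_locks.sh and autotest_common.sh as shown in the trace above):

  locks_exist() {
    local pid=$1
    # lslocks lists every lock held by the process; an spdk_tgt that has
    # claimed its cores holds one /var/tmp/spdk_cpu_lock_NNN file per core.
    lslocks -p "$pid" | grep -q spdk_cpu_lock
  }

  # kill -0 sends no signal; it only tests whether the PID still exists,
  # which is how killprocess decides whether there is anything to kill.
  kill -0 "$pid" && locks_exist "$pid" && echo "target up, cores claimed"

The stray "lslocks: write error" lines in the log are a side effect of this pipeline: grep -q exits on the first match, so lslocks takes a SIGPIPE while still writing, and the check's result is unaffected.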
00:05:40.384 10:32:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.384 10:32:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.384 [2024-11-19 10:32:47.799537] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:05:40.384 [2024-11-19 10:32:47.799578] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1506774 ] 00:05:40.644 [2024-11-19 10:32:47.876410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.644 [2024-11-19 10:32:47.919151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.904 10:32:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.904 10:32:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:40.904 10:32:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:40.904 10:32:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.904 10:32:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.904 10:32:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.904 10:32:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:40.904 10:32:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:40.904 10:32:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:40.904 10:32:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:40.904 10:32:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:40.904 10:32:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.904 10:32:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.904 10:32:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.904 10:32:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1506774 00:05:40.904 10:32:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1506774 00:05:40.904 10:32:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:41.163 10:32:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1506774 00:05:41.163 10:32:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1506774 ']' 00:05:41.163 10:32:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1506774 00:05:41.163 10:32:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:41.163 10:32:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:41.163 10:32:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1506774 00:05:41.163 10:32:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:41.163 
10:32:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:41.163 10:32:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1506774' 00:05:41.163 killing process with pid 1506774 00:05:41.163 10:32:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1506774 00:05:41.164 10:32:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1506774 00:05:41.423 00:05:41.423 real 0m1.040s 00:05:41.423 user 0m0.999s 00:05:41.423 sys 0m0.471s 00:05:41.423 10:32:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.423 10:32:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.423 ************************************ 00:05:41.423 END TEST default_locks_via_rpc 00:05:41.423 ************************************ 00:05:41.423 10:32:48 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:41.423 10:32:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.423 10:32:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.423 10:32:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:41.423 ************************************ 00:05:41.423 START TEST non_locking_app_on_locked_coremask 00:05:41.423 ************************************ 00:05:41.423 10:32:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:41.423 10:32:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1507030 00:05:41.423 10:32:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1507030 /var/tmp/spdk.sock 00:05:41.423 10:32:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:41.423 10:32:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1507030 ']' 00:05:41.423 10:32:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.423 10:32:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.423 10:32:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.423 10:32:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.423 10:32:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.683 [2024-11-19 10:32:48.909192] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:05:41.683 [2024-11-19 10:32:48.909236] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1507030 ] 00:05:41.683 [2024-11-19 10:32:48.983020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.683 [2024-11-19 10:32:49.022181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.944 10:32:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.945 10:32:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:41.945 10:32:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1507033 00:05:41.945 10:32:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1507033 /var/tmp/spdk2.sock 00:05:41.945 10:32:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:41.945 10:32:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1507033 ']' 00:05:41.945 10:32:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:41.945 10:32:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.945 10:32:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:41.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:41.945 10:32:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.945 10:32:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.945 [2024-11-19 10:32:49.303931] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:05:41.945 [2024-11-19 10:32:49.303984] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1507033 ] 00:05:42.206 [2024-11-19 10:32:49.395757] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
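The notice above is the effect of --disable-cpumask-locks: the second target skips claiming the per-core lock files, which is the only reason it can come up on the same core as the first. The arrangement this test builds, reduced to two commands (binary path relative to the spdk checkout, flags and socket path as in the log; backgrounding added here for illustration):

  # First target claims core 0, creating/locking /var/tmp/spdk_cpu_lock_000.
  ./build/bin/spdk_tgt -m 0x1 &

  # Second target shares core 0 only because claiming is disabled; without
  # the flag it would abort with "Cannot create lock on core 0, probably
  # process <pid> has claimed it" (see locking_app_on_locked_coremask below).
  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &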
00:05:42.206 [2024-11-19 10:32:49.395783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.206 [2024-11-19 10:32:49.483621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.774 10:32:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.774 10:32:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:42.774 10:32:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1507030 00:05:42.774 10:32:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1507030 00:05:42.774 10:32:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:43.341 lslocks: write error 00:05:43.341 10:32:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1507030 00:05:43.341 10:32:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1507030 ']' 00:05:43.341 10:32:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1507030 00:05:43.341 10:32:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:43.341 10:32:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:43.341 10:32:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1507030 00:05:43.599 10:32:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:43.599 10:32:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:43.599 10:32:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1507030' 00:05:43.599 killing process with pid 1507030 00:05:43.599 10:32:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1507030 00:05:43.599 10:32:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1507030 00:05:44.170 10:32:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1507033 00:05:44.170 10:32:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1507033 ']' 00:05:44.170 10:32:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1507033 00:05:44.170 10:32:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:44.170 10:32:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:44.170 10:32:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1507033 00:05:44.170 10:32:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:44.170 10:32:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:44.170 10:32:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1507033' 00:05:44.170 
killing process with pid 1507033 00:05:44.170 10:32:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1507033 00:05:44.170 10:32:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1507033 00:05:44.428 00:05:44.428 real 0m2.892s 00:05:44.429 user 0m3.057s 00:05:44.429 sys 0m0.964s 00:05:44.429 10:32:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.429 10:32:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.429 ************************************ 00:05:44.429 END TEST non_locking_app_on_locked_coremask 00:05:44.429 ************************************ 00:05:44.429 10:32:51 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:44.429 10:32:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.429 10:32:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.429 10:32:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.429 ************************************ 00:05:44.429 START TEST locking_app_on_unlocked_coremask 00:05:44.429 ************************************ 00:05:44.429 10:32:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:44.429 10:32:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1507531 00:05:44.429 10:32:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1507531 /var/tmp/spdk.sock 00:05:44.429 10:32:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:44.429 10:32:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1507531 ']' 00:05:44.429 10:32:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.429 10:32:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.429 10:32:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.429 10:32:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.429 10:32:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.429 [2024-11-19 10:32:51.868008] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:05:44.429 [2024-11-19 10:32:51.868048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1507531 ] 00:05:44.687 [2024-11-19 10:32:51.940851] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
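A target started this way is not locked out of the mechanism for good: as default_locks_via_rpc showed above, the locks can be dropped and re-claimed at runtime over the RPC socket. The direct command-line equivalents of those rpc_cmd calls (socket path as in the log):

  # Release the per-core lock files of a running target...
  ./scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks

  # ...and claim them again; afterwards lslocks -p <pid> once more shows
  # the spdk_cpu_lock entries.
  ./scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks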
00:05:44.687 [2024-11-19 10:32:51.940878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.687 [2024-11-19 10:32:51.983508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.945 10:32:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.945 10:32:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:44.945 10:32:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1507538 00:05:44.945 10:32:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1507538 /var/tmp/spdk2.sock 00:05:44.945 10:32:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:44.945 10:32:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1507538 ']' 00:05:44.945 10:32:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:44.945 10:32:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.945 10:32:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:44.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:44.945 10:32:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.945 10:32:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.945 [2024-11-19 10:32:52.246773] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:05:44.945 [2024-11-19 10:32:52.246819] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1507538 ] 00:05:44.945 [2024-11-19 10:32:52.332126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.204 [2024-11-19 10:32:52.418256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.771 10:32:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.771 10:32:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:45.771 10:32:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1507538 00:05:45.771 10:32:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1507538 00:05:45.771 10:32:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:46.339 lslocks: write error 00:05:46.339 10:32:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1507531 00:05:46.339 10:32:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1507531 ']' 00:05:46.339 10:32:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1507531 00:05:46.339 10:32:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:46.339 10:32:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:46.339 10:32:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1507531 00:05:46.339 10:32:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:46.339 10:32:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:46.339 10:32:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1507531' 00:05:46.339 killing process with pid 1507531 00:05:46.339 10:32:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1507531 00:05:46.340 10:32:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1507531 00:05:46.909 10:32:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1507538 00:05:46.909 10:32:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1507538 ']' 00:05:46.909 10:32:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1507538 00:05:46.909 10:32:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:46.909 10:32:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:46.909 10:32:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1507538 00:05:46.909 10:32:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:46.909 10:32:54 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:46.909 10:32:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1507538' 00:05:46.909 killing process with pid 1507538 00:05:46.909 10:32:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1507538 00:05:46.909 10:32:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1507538 00:05:47.168 00:05:47.168 real 0m2.799s 00:05:47.168 user 0m2.955s 00:05:47.168 sys 0m0.919s 00:05:47.168 10:32:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.168 10:32:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.168 ************************************ 00:05:47.168 END TEST locking_app_on_unlocked_coremask 00:05:47.168 ************************************ 00:05:47.428 10:32:54 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:47.428 10:32:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.428 10:32:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.428 10:32:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.428 ************************************ 00:05:47.428 START TEST locking_app_on_locked_coremask 00:05:47.428 ************************************ 00:05:47.428 10:32:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:47.428 10:32:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1508032 00:05:47.428 10:32:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:47.428 10:32:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1508032 /var/tmp/spdk.sock 00:05:47.428 10:32:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1508032 ']' 00:05:47.428 10:32:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.428 10:32:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.428 10:32:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.428 10:32:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.428 10:32:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.428 [2024-11-19 10:32:54.737521] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:05:47.428 [2024-11-19 10:32:54.737564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1508032 ] 00:05:47.428 [2024-11-19 10:32:54.810341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.428 [2024-11-19 10:32:54.848460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.687 10:32:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.687 10:32:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:47.687 10:32:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1508035 00:05:47.687 10:32:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1508035 /var/tmp/spdk2.sock 00:05:47.687 10:32:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:47.687 10:32:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:47.687 10:32:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1508035 /var/tmp/spdk2.sock 00:05:47.687 10:32:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:47.687 10:32:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:47.687 10:32:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:47.687 10:32:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:47.687 10:32:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1508035 /var/tmp/spdk2.sock 00:05:47.687 10:32:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1508035 ']' 00:05:47.687 10:32:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.687 10:32:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.687 10:32:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:47.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:47.687 10:32:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.687 10:32:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.687 [2024-11-19 10:32:55.114018] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:05:47.687 [2024-11-19 10:32:55.114059] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1508035 ] 00:05:47.946 [2024-11-19 10:32:55.207200] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1508032 has claimed it. 00:05:47.946 [2024-11-19 10:32:55.207240] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:48.514 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1508035) - No such process 00:05:48.514 ERROR: process (pid: 1508035) is no longer running 00:05:48.514 10:32:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.514 10:32:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:48.514 10:32:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:48.514 10:32:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:48.514 10:32:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:48.514 10:32:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:48.514 10:32:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1508032 00:05:48.514 10:32:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1508032 00:05:48.514 10:32:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:49.082 lslocks: write error 00:05:49.082 10:32:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1508032 00:05:49.082 10:32:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1508032 ']' 00:05:49.082 10:32:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1508032 00:05:49.082 10:32:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:49.082 10:32:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.082 10:32:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1508032 00:05:49.082 10:32:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:49.082 10:32:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:49.082 10:32:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1508032' 00:05:49.082 killing process with pid 1508032 00:05:49.082 10:32:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1508032 00:05:49.082 10:32:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1508032 00:05:49.342 00:05:49.342 real 0m1.975s 00:05:49.342 user 0m2.119s 00:05:49.342 sys 0m0.663s 00:05:49.342 10:32:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
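Both negative checks above lean on the NOT wrapper from autotest_common.sh, which inverts a command's exit status so that an expected failure counts as a pass. The es bookkeeping visible in the trace (es=1, the "(( es > 128 ))" guard) corresponds to roughly this logic, sketched here in simplified form rather than copied from the helper:

  NOT() {
    local es=0
    "$@" || es=$?
    # Exit codes above 128 mean the command died on a signal; that is a
    # crash, not the clean failure the test expects.
    (( es > 128 )) && return 1
    # Plain failure (es != 0) becomes success for the caller.
    (( es != 0 ))
  }

  # Usage, as in the test above: a second claim of a taken core must fail.
  NOT waitforlisten "$pid2" /var/tmp/spdk2.sock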
00:05:49.342 10:32:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.342 ************************************ 00:05:49.342 END TEST locking_app_on_locked_coremask 00:05:49.342 ************************************ 00:05:49.342 10:32:56 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:49.342 10:32:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.342 10:32:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.342 10:32:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.342 ************************************ 00:05:49.342 START TEST locking_overlapped_coremask 00:05:49.342 ************************************ 00:05:49.342 10:32:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:49.342 10:32:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1508393 00:05:49.342 10:32:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1508393 /var/tmp/spdk.sock 00:05:49.342 10:32:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:49.342 10:32:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1508393 ']' 00:05:49.342 10:32:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.342 10:32:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.342 10:32:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.342 10:32:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.342 10:32:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.342 [2024-11-19 10:32:56.783746] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:05:49.342 [2024-11-19 10:32:56.783793] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1508393 ] 00:05:49.602 [2024-11-19 10:32:56.861795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:49.602 [2024-11-19 10:32:56.908373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.602 [2024-11-19 10:32:56.908509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.602 [2024-11-19 10:32:56.908510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.170 10:32:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.429 10:32:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:50.429 10:32:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:50.429 10:32:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1508538 00:05:50.429 10:32:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1508538 /var/tmp/spdk2.sock 00:05:50.429 10:32:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:50.429 10:32:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1508538 /var/tmp/spdk2.sock 00:05:50.429 10:32:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:50.429 10:32:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:50.429 10:32:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:50.429 10:32:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:50.429 10:32:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1508538 /var/tmp/spdk2.sock 00:05:50.429 10:32:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1508538 ']' 00:05:50.429 10:32:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.429 10:32:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.429 10:32:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:50.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:50.429 10:32:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.429 10:32:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.429 [2024-11-19 10:32:57.660050] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:05:50.429 [2024-11-19 10:32:57.660097] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1508538 ] 00:05:50.429 [2024-11-19 10:32:57.752750] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1508393 has claimed it. 00:05:50.429 [2024-11-19 10:32:57.752790] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:50.998 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1508538) - No such process 00:05:50.998 ERROR: process (pid: 1508538) is no longer running 00:05:50.998 10:32:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.998 10:32:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:50.998 10:32:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:50.998 10:32:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:50.998 10:32:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:50.998 10:32:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:50.998 10:32:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:50.998 10:32:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:50.998 10:32:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:50.998 10:32:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:50.998 10:32:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1508393 00:05:50.998 10:32:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1508393 ']' 00:05:50.998 10:32:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1508393 00:05:50.998 10:32:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:50.998 10:32:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:50.998 10:32:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1508393 00:05:50.998 10:32:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:50.998 10:32:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:50.998 10:32:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1508393' 00:05:50.998 killing process with pid 1508393 00:05:50.998 10:32:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1508393 00:05:50.998 10:32:58 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1508393 00:05:51.258 00:05:51.258 real 0m1.939s 00:05:51.258 user 0m5.573s 00:05:51.258 sys 0m0.420s 00:05:51.258 10:32:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.258 10:32:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.258 ************************************ 00:05:51.258 END TEST locking_overlapped_coremask 00:05:51.258 ************************************ 00:05:51.258 10:32:58 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:51.258 10:32:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.258 10:32:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.258 10:32:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.518 ************************************ 00:05:51.518 START TEST locking_overlapped_coremask_via_rpc 00:05:51.518 ************************************ 00:05:51.518 10:32:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:51.518 10:32:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1508794 00:05:51.518 10:32:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1508794 /var/tmp/spdk.sock 00:05:51.518 10:32:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:51.518 10:32:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1508794 ']' 00:05:51.518 10:32:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.518 10:32:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.518 10:32:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.518 10:32:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.518 10:32:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.518 [2024-11-19 10:32:58.790568] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:05:51.518 [2024-11-19 10:32:58.790608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1508794 ] 00:05:51.518 [2024-11-19 10:32:58.866487] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:51.518 [2024-11-19 10:32:58.866512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:51.518 [2024-11-19 10:32:58.911668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.518 [2024-11-19 10:32:58.911775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.518 [2024-11-19 10:32:58.911775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:51.778 10:32:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.778 10:32:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:51.778 10:32:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1508800 00:05:51.778 10:32:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1508800 /var/tmp/spdk2.sock 00:05:51.778 10:32:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:51.778 10:32:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1508800 ']' 00:05:51.778 10:32:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:51.778 10:32:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.778 10:32:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:51.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:51.778 10:32:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.778 10:32:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.778 [2024-11-19 10:32:59.176368] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:05:51.778 [2024-11-19 10:32:59.176417] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1508800 ] 00:05:52.037 [2024-11-19 10:32:59.267877] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:52.037 [2024-11-19 10:32:59.267905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:52.037 [2024-11-19 10:32:59.356257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:52.037 [2024-11-19 10:32:59.360069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.037 [2024-11-19 10:32:59.360070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:52.606 10:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.606 10:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:52.606 10:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:52.606 10:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.606 10:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.606 10:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.606 10:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:52.606 10:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:52.606 10:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:52.606 10:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:52.606 10:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.606 10:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:52.606 10:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.606 10:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:52.606 10:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.606 10:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.606 [2024-11-19 10:33:00.030046] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1508794 has claimed it. 
00:05:52.606 request: 00:05:52.606 { 00:05:52.606 "method": "framework_enable_cpumask_locks", 00:05:52.606 "req_id": 1 00:05:52.606 } 00:05:52.606 Got JSON-RPC error response 00:05:52.606 response: 00:05:52.606 { 00:05:52.606 "code": -32603, 00:05:52.607 "message": "Failed to claim CPU core: 2" 00:05:52.607 } 00:05:52.607 10:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:52.607 10:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:52.607 10:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:52.607 10:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:52.607 10:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:52.607 10:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1508794 /var/tmp/spdk.sock 00:05:52.607 10:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1508794 ']' 00:05:52.607 10:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.607 10:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.607 10:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.607 10:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.607 10:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.866 10:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.866 10:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:52.866 10:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1508800 /var/tmp/spdk2.sock 00:05:52.866 10:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1508800 ']' 00:05:52.866 10:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.866 10:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.866 10:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
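The -32603 error above is the expected result of the overlapping coremasks: the first target (pid 1508794) was started with -m 0x7 (cores 0-2) and claimed its cores through framework_enable_cpumask_locks, so the second target's RPC collides on core 2. A minimal sketch of the overlap arithmetic, with the masks copied from the spdk_tgt invocations traced above (variable names are illustrative, not from the test scripts):

  # coremasks from the two spdk_tgt invocations in this test
  mask1=0x7     # first target: cores 0,1,2
  mask2=0x1c    # second target: cores 2,3,4
  overlap=$(( mask1 & mask2 ))                  # 0x4 -> only bit 2 is set
  printf 'contested cores: 0x%x\n' "$overlap"   # prints 0x4, i.e. core 2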
00:05:52.866 10:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.866 10:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.126 10:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.126 10:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:53.126 10:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:53.126 10:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:53.126 10:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:53.126 10:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:53.126 00:05:53.126 real 0m1.724s 00:05:53.126 user 0m0.830s 00:05:53.126 sys 0m0.140s 00:05:53.126 10:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.126 10:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.126 ************************************ 00:05:53.126 END TEST locking_overlapped_coremask_via_rpc 00:05:53.126 ************************************ 00:05:53.126 10:33:00 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:53.126 10:33:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1508794 ]] 00:05:53.126 10:33:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1508794 00:05:53.126 10:33:00 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1508794 ']' 00:05:53.126 10:33:00 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1508794 00:05:53.126 10:33:00 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:53.126 10:33:00 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.126 10:33:00 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1508794 00:05:53.126 10:33:00 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:53.126 10:33:00 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:53.126 10:33:00 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1508794' 00:05:53.126 killing process with pid 1508794 00:05:53.126 10:33:00 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1508794 00:05:53.126 10:33:00 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1508794 00:05:53.696 10:33:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1508800 ]] 00:05:53.696 10:33:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1508800 00:05:53.696 10:33:00 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1508800 ']' 00:05:53.696 10:33:00 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1508800 00:05:53.696 10:33:00 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:53.696 10:33:00 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:05:53.696 10:33:00 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1508800 00:05:53.696 10:33:00 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:53.696 10:33:00 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:53.696 10:33:00 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1508800' 00:05:53.696 killing process with pid 1508800 00:05:53.696 10:33:00 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1508800 00:05:53.696 10:33:00 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1508800 00:05:53.956 10:33:01 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:53.956 10:33:01 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:53.956 10:33:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1508794 ]] 00:05:53.956 10:33:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1508794 00:05:53.956 10:33:01 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1508794 ']' 00:05:53.956 10:33:01 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1508794 00:05:53.956 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1508794) - No such process 00:05:53.956 10:33:01 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1508794 is not found' 00:05:53.956 Process with pid 1508794 is not found 00:05:53.956 10:33:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1508800 ]] 00:05:53.956 10:33:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1508800 00:05:53.956 10:33:01 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1508800 ']' 00:05:53.956 10:33:01 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1508800 00:05:53.956 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1508800) - No such process 00:05:53.956 10:33:01 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1508800 is not found' 00:05:53.956 Process with pid 1508800 is not found 00:05:53.956 10:33:01 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:53.956 00:05:53.956 real 0m14.673s 00:05:53.956 user 0m26.223s 00:05:53.956 sys 0m4.966s 00:05:53.956 10:33:01 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.956 10:33:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.956 ************************************ 00:05:53.956 END TEST cpu_locks 00:05:53.956 ************************************ 00:05:53.956 00:05:53.956 real 0m39.546s 00:05:53.956 user 1m15.778s 00:05:53.956 sys 0m8.585s 00:05:53.956 10:33:01 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.956 10:33:01 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.956 ************************************ 00:05:53.956 END TEST event 00:05:53.956 ************************************ 00:05:53.956 10:33:01 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:53.956 10:33:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.956 10:33:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.956 10:33:01 -- common/autotest_common.sh@10 -- # set +x 00:05:53.956 ************************************ 00:05:53.956 START TEST thread 00:05:53.956 ************************************ 00:05:53.956 10:33:01 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:54.215 * Looking for test storage... 00:05:54.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:54.215 10:33:01 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:54.215 10:33:01 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:54.215 10:33:01 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:54.215 10:33:01 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:54.215 10:33:01 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.215 10:33:01 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.215 10:33:01 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.215 10:33:01 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.215 10:33:01 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.215 10:33:01 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.215 10:33:01 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.215 10:33:01 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.215 10:33:01 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.215 10:33:01 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.215 10:33:01 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.215 10:33:01 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:54.215 10:33:01 thread -- scripts/common.sh@345 -- # : 1 00:05:54.215 10:33:01 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.215 10:33:01 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:54.215 10:33:01 thread -- scripts/common.sh@365 -- # decimal 1 00:05:54.215 10:33:01 thread -- scripts/common.sh@353 -- # local d=1 00:05:54.215 10:33:01 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.215 10:33:01 thread -- scripts/common.sh@355 -- # echo 1 00:05:54.215 10:33:01 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.215 10:33:01 thread -- scripts/common.sh@366 -- # decimal 2 00:05:54.215 10:33:01 thread -- scripts/common.sh@353 -- # local d=2 00:05:54.215 10:33:01 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.215 10:33:01 thread -- scripts/common.sh@355 -- # echo 2 00:05:54.215 10:33:01 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.215 10:33:01 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.215 10:33:01 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.215 10:33:01 thread -- scripts/common.sh@368 -- # return 0 00:05:54.216 10:33:01 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.216 10:33:01 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:54.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.216 --rc genhtml_branch_coverage=1 00:05:54.216 --rc genhtml_function_coverage=1 00:05:54.216 --rc genhtml_legend=1 00:05:54.216 --rc geninfo_all_blocks=1 00:05:54.216 --rc geninfo_unexecuted_blocks=1 00:05:54.216 00:05:54.216 ' 00:05:54.216 10:33:01 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:54.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.216 --rc genhtml_branch_coverage=1 00:05:54.216 --rc genhtml_function_coverage=1 00:05:54.216 --rc genhtml_legend=1 00:05:54.216 --rc geninfo_all_blocks=1 00:05:54.216 --rc geninfo_unexecuted_blocks=1 00:05:54.216 
00:05:54.216 ' 00:05:54.216 10:33:01 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:54.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.216 --rc genhtml_branch_coverage=1 00:05:54.216 --rc genhtml_function_coverage=1 00:05:54.216 --rc genhtml_legend=1 00:05:54.216 --rc geninfo_all_blocks=1 00:05:54.216 --rc geninfo_unexecuted_blocks=1 00:05:54.216 00:05:54.216 ' 00:05:54.216 10:33:01 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:54.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.216 --rc genhtml_branch_coverage=1 00:05:54.216 --rc genhtml_function_coverage=1 00:05:54.216 --rc genhtml_legend=1 00:05:54.216 --rc geninfo_all_blocks=1 00:05:54.216 --rc geninfo_unexecuted_blocks=1 00:05:54.216 00:05:54.216 ' 00:05:54.216 10:33:01 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:54.216 10:33:01 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:54.216 10:33:01 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.216 10:33:01 thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.216 ************************************ 00:05:54.216 START TEST thread_poller_perf 00:05:54.216 ************************************ 00:05:54.216 10:33:01 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:54.216 [2024-11-19 10:33:01.559054] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:05:54.216 [2024-11-19 10:33:01.559122] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1509466 ] 00:05:54.216 [2024-11-19 10:33:01.640505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.475 [2024-11-19 10:33:01.683303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.475 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:55.412 [2024-11-19T09:33:02.861Z] ====================================== 00:05:55.412 [2024-11-19T09:33:02.861Z] busy:2305913952 (cyc) 00:05:55.412 [2024-11-19T09:33:02.861Z] total_run_count: 401000 00:05:55.412 [2024-11-19T09:33:02.861Z] tsc_hz: 2300000000 (cyc) 00:05:55.412 [2024-11-19T09:33:02.861Z] ====================================== 00:05:55.412 [2024-11-19T09:33:02.861Z] poller_cost: 5750 (cyc), 2500 (nsec) 00:05:55.412 00:05:55.412 real 0m1.194s 00:05:55.412 user 0m1.109s 00:05:55.412 sys 0m0.080s 00:05:55.412 10:33:02 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.412 10:33:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:55.412 ************************************ 00:05:55.412 END TEST thread_poller_perf 00:05:55.412 ************************************ 00:05:55.412 10:33:02 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:55.412 10:33:02 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:55.412 10:33:02 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.412 10:33:02 thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.412 ************************************ 00:05:55.412 START TEST thread_poller_perf 00:05:55.412 ************************************ 00:05:55.412 10:33:02 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:55.412 [2024-11-19 10:33:02.820065] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:05:55.412 [2024-11-19 10:33:02.820122] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1509739 ] 00:05:55.671 [2024-11-19 10:33:02.896029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.671 [2024-11-19 10:33:02.937285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.671 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:56.609 [2024-11-19T09:33:04.058Z] ====================================== 00:05:56.609 [2024-11-19T09:33:04.058Z] busy:2301462158 (cyc) 00:05:56.609 [2024-11-19T09:33:04.058Z] total_run_count: 5314000 00:05:56.609 [2024-11-19T09:33:04.058Z] tsc_hz: 2300000000 (cyc) 00:05:56.609 [2024-11-19T09:33:04.058Z] ====================================== 00:05:56.609 [2024-11-19T09:33:04.058Z] poller_cost: 433 (cyc), 188 (nsec) 00:05:56.609 00:05:56.609 real 0m1.180s 00:05:56.609 user 0m1.103s 00:05:56.609 sys 0m0.072s 00:05:56.609 10:33:03 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.609 10:33:03 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:56.609 ************************************ 00:05:56.609 END TEST thread_poller_perf 00:05:56.609 ************************************ 00:05:56.609 10:33:04 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:56.609 00:05:56.609 real 0m2.681s 00:05:56.609 user 0m2.361s 00:05:56.609 sys 0m0.330s 00:05:56.609 10:33:04 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.609 10:33:04 thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.609 ************************************ 00:05:56.609 END TEST thread 00:05:56.609 ************************************ 00:05:56.609 10:33:04 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:56.609 10:33:04 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:56.609 10:33:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.609 10:33:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.609 10:33:04 -- common/autotest_common.sh@10 -- # set +x 00:05:56.869 ************************************ 00:05:56.869 START TEST app_cmdline 00:05:56.869 ************************************ 00:05:56.869 10:33:04 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:56.869 * Looking for test storage... 
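The poller_cost lines in the two thread_poller_perf runs above are derived values: busy cycles divided by total_run_count, then converted to nanoseconds with tsc_hz. For the 1 us run that is 2305913952 / 401000 = 5750 cyc, or 2500 ns at 2.3 GHz; reproducing the 0 us run the same way (a sketch, with the values copied from the output above):

  busy=2301462158; runs=5314000; tsc_hz=2300000000   # from the 0 us run
  cyc=$(( busy / runs ))                             # 433 cycles per poll
  nsec=$(( cyc * 1000000000 / tsc_hz ))              # 188 ns at 2.3 GHz
  echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"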
00:05:56.869 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:56.869 10:33:04 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:56.869 10:33:04 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:05:56.869 10:33:04 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:56.869 10:33:04 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:56.869 10:33:04 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:56.869 10:33:04 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:56.869 10:33:04 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:56.869 10:33:04 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.869 10:33:04 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:56.869 10:33:04 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:56.869 10:33:04 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:56.869 10:33:04 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:56.869 10:33:04 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:56.869 10:33:04 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:56.869 10:33:04 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:56.869 10:33:04 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:56.869 10:33:04 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:56.869 10:33:04 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:56.869 10:33:04 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:56.869 10:33:04 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:56.869 10:33:04 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:56.869 10:33:04 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.869 10:33:04 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:56.869 10:33:04 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:56.869 10:33:04 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:56.869 10:33:04 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:56.869 10:33:04 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.869 10:33:04 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:56.869 10:33:04 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:56.869 10:33:04 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:56.869 10:33:04 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:56.869 10:33:04 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:56.869 10:33:04 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.869 10:33:04 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:56.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.869 --rc genhtml_branch_coverage=1 00:05:56.869 --rc genhtml_function_coverage=1 00:05:56.869 --rc genhtml_legend=1 00:05:56.869 --rc geninfo_all_blocks=1 00:05:56.869 --rc geninfo_unexecuted_blocks=1 00:05:56.869 00:05:56.869 ' 00:05:56.869 10:33:04 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:56.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.869 --rc genhtml_branch_coverage=1 00:05:56.869 --rc genhtml_function_coverage=1 00:05:56.869 --rc genhtml_legend=1 00:05:56.869 --rc geninfo_all_blocks=1 00:05:56.869 --rc geninfo_unexecuted_blocks=1 
00:05:56.869 00:05:56.869 ' 00:05:56.869 10:33:04 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:56.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.869 --rc genhtml_branch_coverage=1 00:05:56.869 --rc genhtml_function_coverage=1 00:05:56.869 --rc genhtml_legend=1 00:05:56.869 --rc geninfo_all_blocks=1 00:05:56.869 --rc geninfo_unexecuted_blocks=1 00:05:56.869 00:05:56.869 ' 00:05:56.869 10:33:04 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:56.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.869 --rc genhtml_branch_coverage=1 00:05:56.869 --rc genhtml_function_coverage=1 00:05:56.869 --rc genhtml_legend=1 00:05:56.869 --rc geninfo_all_blocks=1 00:05:56.869 --rc geninfo_unexecuted_blocks=1 00:05:56.869 00:05:56.869 ' 00:05:56.870 10:33:04 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:56.870 10:33:04 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1510035 00:05:56.870 10:33:04 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1510035 00:05:56.870 10:33:04 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:56.870 10:33:04 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1510035 ']' 00:05:56.870 10:33:04 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.870 10:33:04 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.870 10:33:04 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.870 10:33:04 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.870 10:33:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:56.870 [2024-11-19 10:33:04.308078] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:05:56.870 [2024-11-19 10:33:04.308127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1510035 ] 00:05:57.129 [2024-11-19 10:33:04.382325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.129 [2024-11-19 10:33:04.423214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.389 10:33:04 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.389 10:33:04 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:57.389 10:33:04 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:57.389 { 00:05:57.389 "version": "SPDK v25.01-pre git sha1 a0c128549", 00:05:57.389 "fields": { 00:05:57.389 "major": 25, 00:05:57.389 "minor": 1, 00:05:57.389 "patch": 0, 00:05:57.389 "suffix": "-pre", 00:05:57.389 "commit": "a0c128549" 00:05:57.389 } 00:05:57.389 } 00:05:57.389 10:33:04 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:57.389 10:33:04 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:57.389 10:33:04 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:57.389 10:33:04 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:57.648 10:33:04 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:57.648 10:33:04 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:57.648 10:33:04 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.648 10:33:04 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:57.648 10:33:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:57.648 10:33:04 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.648 10:33:04 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:57.648 10:33:04 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:57.648 10:33:04 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:57.648 10:33:04 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:57.648 10:33:04 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:57.648 10:33:04 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:57.648 10:33:04 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.648 10:33:04 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:57.648 10:33:04 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.648 10:33:04 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:57.648 10:33:04 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.648 10:33:04 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:57.648 10:33:04 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:57.648 10:33:04 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:57.648 request: 00:05:57.648 { 00:05:57.648 "method": "env_dpdk_get_mem_stats", 00:05:57.648 "req_id": 1 00:05:57.648 } 00:05:57.648 Got JSON-RPC error response 00:05:57.648 response: 00:05:57.648 { 00:05:57.648 "code": -32601, 00:05:57.648 "message": "Method not found" 00:05:57.648 } 00:05:57.648 10:33:05 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:57.648 10:33:05 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:57.648 10:33:05 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:57.648 10:33:05 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:57.648 10:33:05 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1510035 00:05:57.908 10:33:05 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1510035 ']' 00:05:57.908 10:33:05 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1510035 00:05:57.908 10:33:05 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:57.908 10:33:05 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.908 10:33:05 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1510035 00:05:57.908 10:33:05 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:57.908 10:33:05 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:57.908 10:33:05 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1510035' 00:05:57.908 killing process with pid 1510035 00:05:57.908 10:33:05 app_cmdline -- common/autotest_common.sh@973 -- # kill 1510035 00:05:57.908 10:33:05 app_cmdline -- common/autotest_common.sh@978 -- # wait 1510035 00:05:58.167 00:05:58.167 real 0m1.368s 00:05:58.167 user 0m1.601s 00:05:58.167 sys 0m0.459s 00:05:58.167 10:33:05 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.167 10:33:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:58.167 ************************************ 00:05:58.167 END TEST app_cmdline 00:05:58.167 ************************************ 00:05:58.167 10:33:05 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:58.167 10:33:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.167 10:33:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.167 10:33:05 -- common/autotest_common.sh@10 -- # set +x 00:05:58.167 ************************************ 00:05:58.167 START TEST version 00:05:58.167 ************************************ 00:05:58.167 10:33:05 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:58.167 * Looking for test storage... 
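The -32601 'Method not found' above is the behavior under test: this spdk_tgt was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so those two methods answer while every other method, here env_dpdk_get_mem_stats, is rejected. The three calls as the trace exercises them (the rpc shell variable is just shorthand for the workspace path):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc spdk_get_version          # allowed: returns the version object shown above
  $rpc rpc_get_methods           # allowed: lists exactly the two permitted methods
  $rpc env_dpdk_get_mem_stats    # rejected with JSON-RPC error -32601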
00:05:58.167 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:58.167 10:33:05 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:58.167 10:33:05 version -- common/autotest_common.sh@1693 -- # lcov --version 00:05:58.167 10:33:05 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:58.426 10:33:05 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:58.426 10:33:05 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.426 10:33:05 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.426 10:33:05 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.426 10:33:05 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.426 10:33:05 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.426 10:33:05 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.426 10:33:05 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.426 10:33:05 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.426 10:33:05 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.426 10:33:05 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.426 10:33:05 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.426 10:33:05 version -- scripts/common.sh@344 -- # case "$op" in 00:05:58.426 10:33:05 version -- scripts/common.sh@345 -- # : 1 00:05:58.426 10:33:05 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.426 10:33:05 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:58.426 10:33:05 version -- scripts/common.sh@365 -- # decimal 1 00:05:58.426 10:33:05 version -- scripts/common.sh@353 -- # local d=1 00:05:58.426 10:33:05 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.426 10:33:05 version -- scripts/common.sh@355 -- # echo 1 00:05:58.426 10:33:05 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.426 10:33:05 version -- scripts/common.sh@366 -- # decimal 2 00:05:58.426 10:33:05 version -- scripts/common.sh@353 -- # local d=2 00:05:58.426 10:33:05 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.426 10:33:05 version -- scripts/common.sh@355 -- # echo 2 00:05:58.426 10:33:05 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.426 10:33:05 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.426 10:33:05 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.426 10:33:05 version -- scripts/common.sh@368 -- # return 0 00:05:58.426 10:33:05 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.426 10:33:05 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:58.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.426 --rc genhtml_branch_coverage=1 00:05:58.426 --rc genhtml_function_coverage=1 00:05:58.426 --rc genhtml_legend=1 00:05:58.426 --rc geninfo_all_blocks=1 00:05:58.426 --rc geninfo_unexecuted_blocks=1 00:05:58.426 00:05:58.426 ' 00:05:58.426 10:33:05 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:58.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.426 --rc genhtml_branch_coverage=1 00:05:58.426 --rc genhtml_function_coverage=1 00:05:58.426 --rc genhtml_legend=1 00:05:58.426 --rc geninfo_all_blocks=1 00:05:58.426 --rc geninfo_unexecuted_blocks=1 00:05:58.426 00:05:58.426 ' 00:05:58.426 10:33:05 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:58.426 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.426 --rc genhtml_branch_coverage=1 00:05:58.426 --rc genhtml_function_coverage=1 00:05:58.426 --rc genhtml_legend=1 00:05:58.426 --rc geninfo_all_blocks=1 00:05:58.426 --rc geninfo_unexecuted_blocks=1 00:05:58.426 00:05:58.426 ' 00:05:58.426 10:33:05 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:58.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.426 --rc genhtml_branch_coverage=1 00:05:58.426 --rc genhtml_function_coverage=1 00:05:58.426 --rc genhtml_legend=1 00:05:58.426 --rc geninfo_all_blocks=1 00:05:58.426 --rc geninfo_unexecuted_blocks=1 00:05:58.426 00:05:58.427 ' 00:05:58.427 10:33:05 version -- app/version.sh@17 -- # get_header_version major 00:05:58.427 10:33:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:58.427 10:33:05 version -- app/version.sh@14 -- # cut -f2 00:05:58.427 10:33:05 version -- app/version.sh@14 -- # tr -d '"' 00:05:58.427 10:33:05 version -- app/version.sh@17 -- # major=25 00:05:58.427 10:33:05 version -- app/version.sh@18 -- # get_header_version minor 00:05:58.427 10:33:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:58.427 10:33:05 version -- app/version.sh@14 -- # cut -f2 00:05:58.427 10:33:05 version -- app/version.sh@14 -- # tr -d '"' 00:05:58.427 10:33:05 version -- app/version.sh@18 -- # minor=1 00:05:58.427 10:33:05 version -- app/version.sh@19 -- # get_header_version patch 00:05:58.427 10:33:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:58.427 10:33:05 version -- app/version.sh@14 -- # cut -f2 00:05:58.427 10:33:05 version -- app/version.sh@14 -- # tr -d '"' 00:05:58.427 10:33:05 version -- app/version.sh@19 -- # patch=0 00:05:58.427 10:33:05 version -- app/version.sh@20 -- # get_header_version suffix 00:05:58.427 10:33:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:58.427 10:33:05 version -- app/version.sh@14 -- # cut -f2 00:05:58.427 10:33:05 version -- app/version.sh@14 -- # tr -d '"' 00:05:58.427 10:33:05 version -- app/version.sh@20 -- # suffix=-pre 00:05:58.427 10:33:05 version -- app/version.sh@22 -- # version=25.1 00:05:58.427 10:33:05 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:58.427 10:33:05 version -- app/version.sh@28 -- # version=25.1rc0 00:05:58.427 10:33:05 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:58.427 10:33:05 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:58.427 10:33:05 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:58.427 10:33:05 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:58.427 00:05:58.427 real 0m0.246s 00:05:58.427 user 0m0.155s 00:05:58.427 sys 0m0.134s 00:05:58.427 10:33:05 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.427 
10:33:05 version -- common/autotest_common.sh@10 -- # set +x 00:05:58.427 ************************************ 00:05:58.427 END TEST version 00:05:58.427 ************************************ 00:05:58.427 10:33:05 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:58.427 10:33:05 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:58.427 10:33:05 -- spdk/autotest.sh@194 -- # uname -s 00:05:58.427 10:33:05 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:58.427 10:33:05 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:58.427 10:33:05 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:58.427 10:33:05 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:58.427 10:33:05 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:58.427 10:33:05 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:58.427 10:33:05 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:58.427 10:33:05 -- common/autotest_common.sh@10 -- # set +x 00:05:58.427 10:33:05 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:58.427 10:33:05 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:58.427 10:33:05 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:58.427 10:33:05 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:58.427 10:33:05 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:58.427 10:33:05 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:58.427 10:33:05 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:58.427 10:33:05 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:58.427 10:33:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.427 10:33:05 -- common/autotest_common.sh@10 -- # set +x 00:05:58.427 ************************************ 00:05:58.427 START TEST nvmf_tcp 00:05:58.427 ************************************ 00:05:58.687 10:33:05 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:58.687 * Looking for test storage... 
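get_header_version in the version.sh trace above is a three-stage pipeline over include/spdk/version.h: grep the matching #define, cut the value field, strip the quotes. Condensed into one helper for readability (a sketch of the traced sequence, not the script verbatim):

  get_header_version() {   # e.g. get_header_version MAJOR -> 25
      grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" \
          /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h |
          cut -f2 | tr -d '"'
  }
  major=$(get_header_version MAJOR)    # 25
  minor=$(get_header_version MINOR)    # 1
  patch=$(get_header_version PATCH)    # 0
  suffix=$(get_header_version SUFFIX)  # -pre; with patch 0 the script reports 25.1rc0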
00:05:58.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:58.687 10:33:05 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:58.687 10:33:05 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:58.687 10:33:05 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:58.687 10:33:06 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:58.687 10:33:06 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.687 10:33:06 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.687 10:33:06 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.687 10:33:06 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.687 10:33:06 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.687 10:33:06 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.687 10:33:06 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.687 10:33:06 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.687 10:33:06 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.687 10:33:06 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.687 10:33:06 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.687 10:33:06 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:58.687 10:33:06 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:58.687 10:33:06 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.687 10:33:06 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:58.687 10:33:06 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:58.687 10:33:06 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:58.687 10:33:06 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.687 10:33:06 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:58.687 10:33:06 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.687 10:33:06 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:58.687 10:33:06 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:58.687 10:33:06 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.687 10:33:06 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:58.687 10:33:06 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.687 10:33:06 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.687 10:33:06 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.687 10:33:06 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:58.687 10:33:06 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.687 10:33:06 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:58.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.687 --rc genhtml_branch_coverage=1 00:05:58.687 --rc genhtml_function_coverage=1 00:05:58.687 --rc genhtml_legend=1 00:05:58.687 --rc geninfo_all_blocks=1 00:05:58.687 --rc geninfo_unexecuted_blocks=1 00:05:58.687 00:05:58.687 ' 00:05:58.687 10:33:06 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:58.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.687 --rc genhtml_branch_coverage=1 00:05:58.687 --rc genhtml_function_coverage=1 00:05:58.687 --rc genhtml_legend=1 00:05:58.687 --rc geninfo_all_blocks=1 00:05:58.687 --rc geninfo_unexecuted_blocks=1 00:05:58.687 00:05:58.687 ' 00:05:58.687 10:33:06 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:05:58.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.687 --rc genhtml_branch_coverage=1 00:05:58.687 --rc genhtml_function_coverage=1 00:05:58.688 --rc genhtml_legend=1 00:05:58.688 --rc geninfo_all_blocks=1 00:05:58.688 --rc geninfo_unexecuted_blocks=1 00:05:58.688 00:05:58.688 ' 00:05:58.688 10:33:06 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:58.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.688 --rc genhtml_branch_coverage=1 00:05:58.688 --rc genhtml_function_coverage=1 00:05:58.688 --rc genhtml_legend=1 00:05:58.688 --rc geninfo_all_blocks=1 00:05:58.688 --rc geninfo_unexecuted_blocks=1 00:05:58.688 00:05:58.688 ' 00:05:58.688 10:33:06 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:58.688 10:33:06 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:58.688 10:33:06 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:58.688 10:33:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:58.688 10:33:06 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.688 10:33:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:58.688 ************************************ 00:05:58.688 START TEST nvmf_target_core 00:05:58.688 ************************************ 00:05:58.688 10:33:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:58.948 * Looking for test storage... 00:05:58.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:58.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.948 --rc genhtml_branch_coverage=1 00:05:58.948 --rc genhtml_function_coverage=1 00:05:58.948 --rc genhtml_legend=1 00:05:58.948 --rc geninfo_all_blocks=1 00:05:58.948 --rc geninfo_unexecuted_blocks=1 00:05:58.948 00:05:58.948 ' 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:58.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.948 --rc genhtml_branch_coverage=1 00:05:58.948 --rc genhtml_function_coverage=1 00:05:58.948 --rc genhtml_legend=1 00:05:58.948 --rc geninfo_all_blocks=1 00:05:58.948 --rc geninfo_unexecuted_blocks=1 00:05:58.948 00:05:58.948 ' 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:58.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.948 --rc genhtml_branch_coverage=1 00:05:58.948 --rc genhtml_function_coverage=1 00:05:58.948 --rc genhtml_legend=1 00:05:58.948 --rc geninfo_all_blocks=1 00:05:58.948 --rc geninfo_unexecuted_blocks=1 00:05:58.948 00:05:58.948 ' 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:58.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.948 --rc genhtml_branch_coverage=1 00:05:58.948 --rc genhtml_function_coverage=1 00:05:58.948 --rc genhtml_legend=1 00:05:58.948 --rc geninfo_all_blocks=1 00:05:58.948 --rc geninfo_unexecuted_blocks=1 00:05:58.948 00:05:58.948 ' 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:58.948 10:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:58.949 10:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:58.949 10:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:58.949 10:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:58.949 10:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:58.949 10:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:58.949 10:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:58.949 10:33:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:58.949 10:33:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:58.949 10:33:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:58.949 10:33:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:58.949 10:33:06 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.949 10:33:06 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.949 10:33:06 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.949 10:33:06 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:58.949 10:33:06 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.949 10:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:58.949 10:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:58.949 10:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:58.949 10:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:58.949 10:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:58.949 10:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:58.949 10:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:58.949 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:58.949 10:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:58.949 10:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:58.949 10:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:58.949 10:33:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:58.949 10:33:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:58.949 10:33:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:58.949 10:33:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:58.949 10:33:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:58.949 10:33:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.949 10:33:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:58.949 
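[editor's note] The "[: : integer expression expected" error traced above is real script fallout, not log corruption: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', and test(1)'s -eq operator requires integers on both sides, so an unset/empty variable makes the comparison fail with status 2 (the run continues because the result is only used as a conditional). A minimal sketch of the failure and a defensive guard; "flag" is a placeholder name, not necessarily the variable common.sh actually tests:

    # sketch: empty string vs. -eq, and a guard that defaults it to 0
    flag=''
    [ "$flag" -eq 1 ] && echo yes      # bash: [: : integer expression expected
    [ "${flag:-0}" -eq 1 ] && echo yes # empty expands to 0: silent, evaluates false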
************************************ 00:05:58.949 START TEST nvmf_abort 00:05:58.949 ************************************ 00:05:58.949 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:59.208 * Looking for test storage... 00:05:59.208 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:59.208 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:59.208 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:05:59.208 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:59.208 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:59.208 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:59.208 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:59.208 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:59.208 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.208 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:59.208 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:59.208 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:59.208 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:59.208 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:59.208 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:59.208 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:59.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.209 --rc genhtml_branch_coverage=1 00:05:59.209 --rc genhtml_function_coverage=1 00:05:59.209 --rc genhtml_legend=1 00:05:59.209 --rc geninfo_all_blocks=1 00:05:59.209 --rc geninfo_unexecuted_blocks=1 00:05:59.209 00:05:59.209 ' 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:59.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.209 --rc genhtml_branch_coverage=1 00:05:59.209 --rc genhtml_function_coverage=1 00:05:59.209 --rc genhtml_legend=1 00:05:59.209 --rc geninfo_all_blocks=1 00:05:59.209 --rc geninfo_unexecuted_blocks=1 00:05:59.209 00:05:59.209 ' 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:59.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.209 --rc genhtml_branch_coverage=1 00:05:59.209 --rc genhtml_function_coverage=1 00:05:59.209 --rc genhtml_legend=1 00:05:59.209 --rc geninfo_all_blocks=1 00:05:59.209 --rc geninfo_unexecuted_blocks=1 00:05:59.209 00:05:59.209 ' 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:59.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.209 --rc genhtml_branch_coverage=1 00:05:59.209 --rc genhtml_function_coverage=1 00:05:59.209 --rc genhtml_legend=1 00:05:59.209 --rc geninfo_all_blocks=1 00:05:59.209 --rc geninfo_unexecuted_blocks=1 00:05:59.209 00:05:59.209 ' 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:59.209 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
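[editor's note] The nvmftestinit trace that follows builds the test network: on this phy rig it moves one port of the two-port NIC into a private network namespace, addresses both sides on 10.0.0.0/24, opens TCP port 4420 through iptables, and pings in both directions to verify the link. A condensed sketch of the same steps, using the interface names from this run (cvl_0_0 = target side, cvl_0_1 = initiator side):

    # condensed sketch of the setup the following trace performs
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # the SPDK_NVMF comment tag lets nvmftestfini strip the rule later
    # (iptables-save | grep -v SPDK_NVMF | iptables-restore)
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> root ns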
00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:59.209 10:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:05.784 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:05.784 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:05.784 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:05.784 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:05.784 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:05.784 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:05.784 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:05.784 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:05.784 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:05.784 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:05.784 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:05.784 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:05.784 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:05.784 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:05.784 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:05.784 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:05.784 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:05.784 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:05.784 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:05.784 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:05.784 10:33:12 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:05.784 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:05.784 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:05.784 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:05.785 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:05.785 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:05.785 10:33:12 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:05.785 Found net devices under 0000:86:00.0: cvl_0_0 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:05.785 Found net devices under 0000:86:00.1: cvl_0_1 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:05.785 10:33:12 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:05.785 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:05.785 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.324 ms 00:06:05.785 00:06:05.785 --- 10.0.0.2 ping statistics --- 00:06:05.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:05.785 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:05.785 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:05.785 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:06:05.785 00:06:05.785 --- 10.0.0.1 ping statistics --- 00:06:05.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:05.785 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:05.785 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:05.786 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:05.786 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:05.786 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:05.786 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:05.786 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:05.786 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:05.786 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:05.786 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:05.786 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:05.786 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1514106 00:06:05.786 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:05.786 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1514106 00:06:05.786 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1514106 ']' 00:06:05.786 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.786 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.786 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.786 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.786 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:05.786 [2024-11-19 10:33:12.611533] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
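[editor's note] With the data path verified, nvmfappstart launches nvmf_tgt inside the target namespace with core mask 0xE (hence the reactors on cores 1-3 below) and waitforlisten blocks until the app answers on its RPC socket. A minimal sketch of that launch-and-wait pattern, assuming the default /var/tmp/spdk.sock socket and paths relative to the SPDK tree:

    # sketch: start the target in the namespace, then poll the RPC socket
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do
        sleep 0.5    # retry until nvmf_tgt listens on /var/tmp/spdk.sock
    done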
00:06:05.786 [2024-11-19 10:33:12.611576] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:05.786 [2024-11-19 10:33:12.688862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:05.786 [2024-11-19 10:33:12.730716] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:05.786 [2024-11-19 10:33:12.730755] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:05.786 [2024-11-19 10:33:12.730762] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:05.786 [2024-11-19 10:33:12.730768] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:05.786 [2024-11-19 10:33:12.730773] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:05.786 [2024-11-19 10:33:12.732244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.786 [2024-11-19 10:33:12.732350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.786 [2024-11-19 10:33:12.732351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:06.045 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.045 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:06.045 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:06.045 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:06.045 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:06.045 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:06.045 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:06.045 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.045 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:06.304 [2024-11-19 10:33:13.498435] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:06.304 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.304 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:06.304 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.304 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:06.304 Malloc0 00:06:06.304 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.304 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:06.304 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.304 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:06.304 Delay0 
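[editor's note] The rpc_cmd calls above and in the next trace map one-to-one onto scripts/rpc.py: create the TCP transport, back a Delay0 bdev with a 64 MiB malloc bdev, then expose it through subsystem cnode0 on 10.0.0.2:4420. The delay parameters are in microseconds, so the 1000000 values add roughly a second of artificial latency, which keeps I/O in flight long enough for the abort example to have something to cancel. A condensed sketch of the same sequence issued directly through rpc.py:

    # the provisioning sequence from the trace, via scripts/rpc.py
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    ./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0          # 64 MiB, 4 KiB blocks
    ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000                 # ~1 s latencies (usec)
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420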
00:06:06.304 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.304 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:06.304 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.304 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:06.304 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.304 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:06.304 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.304 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:06.304 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.304 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:06.304 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.304 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:06.304 [2024-11-19 10:33:13.583763] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:06.304 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.304 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:06.304 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.304 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:06.304 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.304 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:06.304 [2024-11-19 10:33:13.679718] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:08.850 Initializing NVMe Controllers 00:06:08.850 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:08.850 controller IO queue size 128 less than required 00:06:08.850 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:08.850 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:08.850 Initialization complete. Launching workers. 
00:06:08.850 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 36664 00:06:08.850 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36725, failed to submit 62 00:06:08.850 success 36668, unsuccessful 57, failed 0 00:06:08.850 10:33:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:08.850 10:33:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.850 10:33:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:08.850 10:33:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.850 10:33:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:08.850 10:33:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:08.850 10:33:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:08.851 10:33:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:08.851 10:33:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:08.851 10:33:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:08.851 10:33:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:08.851 10:33:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:08.851 rmmod nvme_tcp 00:06:08.851 rmmod nvme_fabrics 00:06:08.851 rmmod nvme_keyring 00:06:08.851 10:33:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:08.851 10:33:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:08.851 10:33:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:08.851 10:33:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1514106 ']' 00:06:08.851 10:33:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1514106 00:06:08.851 10:33:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1514106 ']' 00:06:08.851 10:33:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1514106 00:06:08.851 10:33:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:08.851 10:33:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:08.851 10:33:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1514106 00:06:08.851 10:33:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:08.851 10:33:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:08.851 10:33:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1514106' 00:06:08.851 killing process with pid 1514106 00:06:08.851 10:33:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1514106 00:06:08.851 10:33:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1514106 00:06:08.851 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:08.851 10:33:16 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:08.851 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:08.851 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:08.851 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:08.851 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:08.851 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:08.851 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:08.851 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:08.851 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:08.851 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:08.851 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:10.859 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:10.859 00:06:10.859 real 0m11.892s 00:06:10.859 user 0m13.820s 00:06:10.859 sys 0m5.523s 00:06:10.859 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.859 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:10.859 ************************************ 00:06:10.859 END TEST nvmf_abort 00:06:10.859 ************************************ 00:06:10.859 10:33:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:10.859 10:33:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:10.859 10:33:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.859 10:33:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:11.119 ************************************ 00:06:11.119 START TEST nvmf_ns_hotplug_stress 00:06:11.119 ************************************ 00:06:11.119 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:11.119 * Looking for test storage... 
00:06:11.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:11.119 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:11.119 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:06:11.119 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:11.119 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:11.119 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.119 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.119 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.119 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.119 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.119 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.119 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.119 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.119 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.119 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.119 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.119 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:11.119 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:11.119 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.119 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:11.119 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:11.119 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:11.119 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.119 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:11.119 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.119 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:11.119 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:11.119 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.119 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:11.119 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.119 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.119 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.119 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:11.119 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.119 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:11.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.119 --rc genhtml_branch_coverage=1 00:06:11.119 --rc genhtml_function_coverage=1 00:06:11.119 --rc genhtml_legend=1 00:06:11.119 --rc geninfo_all_blocks=1 00:06:11.119 --rc geninfo_unexecuted_blocks=1 00:06:11.119 00:06:11.119 ' 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:11.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.120 --rc genhtml_branch_coverage=1 00:06:11.120 --rc genhtml_function_coverage=1 00:06:11.120 --rc genhtml_legend=1 00:06:11.120 --rc geninfo_all_blocks=1 00:06:11.120 --rc geninfo_unexecuted_blocks=1 00:06:11.120 00:06:11.120 ' 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:11.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.120 --rc genhtml_branch_coverage=1 00:06:11.120 --rc genhtml_function_coverage=1 00:06:11.120 --rc genhtml_legend=1 00:06:11.120 --rc geninfo_all_blocks=1 00:06:11.120 --rc geninfo_unexecuted_blocks=1 00:06:11.120 00:06:11.120 ' 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:11.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.120 --rc genhtml_branch_coverage=1 00:06:11.120 --rc genhtml_function_coverage=1 00:06:11.120 --rc genhtml_legend=1 00:06:11.120 --rc geninfo_all_blocks=1 00:06:11.120 --rc geninfo_unexecuted_blocks=1 00:06:11.120 00:06:11.120 ' 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:11.120 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:11.120 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:17.689 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:17.690 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:17.690 
10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:17.690 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:17.690 Found net devices under 0000:86:00.0: cvl_0_0 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:17.690 Found net devices under 0000:86:00.1: cvl_0_1 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:17.690 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:17.690 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:17.690 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:06:17.690 00:06:17.690 --- 10.0.0.2 ping statistics --- 00:06:17.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:17.691 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:06:17.691 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:17.691 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:17.691 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:06:17.691 00:06:17.691 --- 10.0.0.1 ping statistics --- 00:06:17.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:17.691 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:06:17.691 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:17.691 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:17.691 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:17.691 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:17.691 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:17.691 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:17.691 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:17.691 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:17.691 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:17.691 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:17.691 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:17.691 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:17.691 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:17.691 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1518157 00:06:17.691 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1518157 00:06:17.691 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:17.691 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
1518157 ']' 00:06:17.691 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.691 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.691 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.691 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.691 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:17.691 [2024-11-19 10:33:24.614676] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:06:17.691 [2024-11-19 10:33:24.614729] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:17.691 [2024-11-19 10:33:24.697931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:17.691 [2024-11-19 10:33:24.741012] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:17.691 [2024-11-19 10:33:24.741050] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:17.691 [2024-11-19 10:33:24.741057] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:17.691 [2024-11-19 10:33:24.741063] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:17.691 [2024-11-19 10:33:24.741068] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
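
[Editor's note] The nvmftestinit/nvmfappstart sequence traced above condenses to a short recipe: move one port of the NIC pair into a network namespace, address both ends, open TCP/4420, then launch nvmf_tgt in the namespace and wait for its RPC socket. A minimal sketch, using the interface names, IPs, and flags visible in this log; the polling loop is paraphrased, not the harness's exact waitforlisten code, and rpc_get_methods is just a cheap query that succeeds once the target is listening on the default /var/tmp/spdk.sock:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                               # sanity checks, both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # Poll until the RPC socket answers; bail out if the target died meanwhile.
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1
        sleep 0.5
    done
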
00:06:17.691 [2024-11-19 10:33:24.742361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.691 [2024-11-19 10:33:24.742403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.691 [2024-11-19 10:33:24.742403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:17.691 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.691 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:17.691 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:17.691 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:17.691 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:17.691 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:17.691 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:17.691 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:17.691 [2024-11-19 10:33:25.059315] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:17.691 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:17.950 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:18.208 [2024-11-19 10:33:25.456745] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:18.208 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:18.467 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:18.467 Malloc0 00:06:18.467 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:18.726 Delay0 00:06:18.726 10:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.983 10:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:19.242 NULL1 00:06:19.242 10:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:19.500 10:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1518628 00:06:19.500 10:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:19.500 10:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1518628 00:06:19.500 10:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.436 Read completed with error (sct=0, sc=11) 00:06:20.436 10:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.694 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:20.694 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:20.694 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:20.694 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:20.694 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:20.694 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:20.695 10:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:20.695 10:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:20.953 true 00:06:20.953 10:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1518628 00:06:20.953 10:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.889 10:33:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.889 10:33:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:21.889 10:33:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:22.148 true 00:06:22.148 10:33:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1518628 00:06:22.148 10:33:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.409 10:33:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
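
[Editor's note] From here to the end of the run the same five traced steps repeat, with null_size counting up from 1003 to 1028. Reconstructed from the trace as a loop (relative paths for brevity; $PERF_PID is the spdk_nvme_perf process started above with -t 30):

    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do        # kill -0 only tests existence, sends no signal
        ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))                 # 1001, 1002, ... as traced
        ./scripts/rpc.py bdev_null_resize NULL1 "$null_size"
    done

The hot-unplugged namespace is what produces the suppressed "Read completed with error (sct=0, sc=11)" messages throughout: reads landing on namespace 1 while it is removed fail back to the initiator.
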
00:06:22.668 10:33:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:22.668 10:33:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:22.668 true 00:06:22.668 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1518628 00:06:22.668 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.045 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.045 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.045 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.045 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.045 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.045 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.045 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.045 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:24.045 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:24.304 true 00:06:24.304 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1518628 00:06:24.304 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.240 10:33:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.240 10:33:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:25.240 10:33:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:25.498 true 00:06:25.498 10:33:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1518628 00:06:25.499 10:33:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.758 10:33:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.758 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:25.758 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:26.017 true 00:06:26.017 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1518628 00:06:26.017 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.395 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.395 10:33:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.395 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.395 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.395 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.395 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.395 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.395 10:33:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:27.395 10:33:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:27.395 true 00:06:27.654 10:33:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1518628 00:06:27.654 10:33:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.590 10:33:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.590 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.590 10:33:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:28.590 10:33:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:28.849 true 00:06:28.849 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1518628 00:06:28.849 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.849 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.108 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:29.108 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:29.367 true 00:06:29.367 10:33:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1518628 00:06:29.367 10:33:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.746 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.746 10:33:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.746 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.746 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.746 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.746 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.746 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.746 10:33:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:30.746 10:33:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:31.005 true 00:06:31.005 10:33:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1518628 00:06:31.005 10:33:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.943 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.943 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.943 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:31.943 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:32.201 true 00:06:32.201 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1518628 00:06:32.201 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.460 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.460 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:32.460 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:32.719 true 00:06:32.719 10:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1518628 00:06:32.719 10:33:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.096 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.096 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:34.096 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:34.355 true 00:06:34.355 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1518628 00:06:34.355 10:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.292 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:35.292 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.292 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:35.292 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:35.552 true 00:06:35.552 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1518628 00:06:35.552 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.811 10:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.811 10:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:35.811 10:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:36.070 true 00:06:36.070 10:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1518628 00:06:36.070 10:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.447 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.447 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:37.447 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:37.706 true 00:06:37.706 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1518628 00:06:37.706 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.644 10:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.644 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.644 10:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:38.644 10:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:38.903 true 00:06:38.903 10:33:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1518628 00:06:38.903 10:33:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.258 10:33:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.258 10:33:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:39.258 10:33:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:39.567 true 00:06:39.567 10:33:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1518628 00:06:39.567 10:33:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.505 10:33:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:40.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:40.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:40.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:40.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:40.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:40.764 10:33:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:06:40.764 10:33:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:06:40.764 true
00:06:41.023 10:33:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1518628
00:06:41.023 10:33:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:41.591 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:41.591 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:41.850 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:41.850 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:06:41.850 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:06:42.109 true
00:06:42.109 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1518628
00:06:42.109 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:42.368 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:42.627 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:06:42.627 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:06:42.627 true
00:06:42.627 10:33:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1518628
00:06:42.627 10:33:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:44.004 10:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:44.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:44.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:44.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:44.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:44.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:44.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:44.004 10:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:06:44.004 10:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:06:44.264 true
00:06:44.264 10:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1518628
00:06:44.264 10:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:45.201 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:45.201 10:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:45.201 10:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:06:45.201 10:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:06:45.459 true
00:06:45.459 10:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1518628
00:06:45.459 10:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:45.719 10:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:45.977 10:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:06:45.978 10:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:06:45.978 true
00:06:45.978 10:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1518628
00:06:45.978 10:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:47.355 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:47.355 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:47.355 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:47.355 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:47.355 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:47.355 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:47.355 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:47.355 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:06:47.355 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:06:47.614 true
00:06:47.614 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1518628
00:06:47.615 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:48.553 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:48.553 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:48.553 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:06:48.553 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:06:48.812 true
00:06:48.812 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1518628
00:06:48.812 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:49.071 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:49.331 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:06:49.331 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:06:49.591 true
00:06:49.591 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1518628
00:06:49.591 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
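The records above are the tail of the first phase of ns_hotplug_stress.sh: while the background I/O generator (PID 1518628) stays alive, each pass detaches namespace 1 of nqn.2016-06.io.spdk:cnode1, re-attaches the Delay0 bdev, and grows the NULL1 bdev by 1 MB per pass (null_size 1019 through 1028). The suppressed read errors (sct=0, sc=11, i.e. generic status 0x0b, Invalid Namespace or Format) are the expected host-side failures while the namespace is detached. A minimal sketch of that loop, reconstructed from the @44-@50 xtrace tags above (the while-loop shape and variable names are inferred from the trace, not copied from the script):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    perf_pid=1518628    # background I/O generator; PID taken from the trace
    null_size=1018
    while kill -0 "$perf_pid" 2>/dev/null; do                              # @44: loop while the generator runs
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1       # @45: hot-remove NSID 1
        "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0     # @46: re-attach the Delay0 bdev
        null_size=$((null_size + 1))                                       # @49
        "$rpc" bdev_null_resize NULL1 "$null_size"                         # @50: new size in MB; prints "true"
    done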
00:06:50.528 Initializing NVMe Controllers
00:06:50.528 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:50.528 Controller IO queue size 128, less than required.
00:06:50.528 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:50.528 Controller IO queue size 128, less than required.
00:06:50.528 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:50.528 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:06:50.528 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:50.528 Initialization complete. Launching workers.
00:06:50.528 ========================================================
00:06:50.528                                                                             Latency(us)
00:06:50.528 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:06:50.528 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2132.80       1.04   43805.21    2441.99 1013449.75
00:06:50.528 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   17677.67       8.63    7240.46    1601.19  376066.49
00:06:50.528 ========================================================
00:06:50.528 Total                                                                  :   19810.47       9.67   11177.03    1601.19 1013449.75
00:06:50.528
00:06:50.528 10:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:50.787 10:33:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:06:50.787 10:33:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:06:51.046 true
00:06:51.046 10:33:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1518628
00:06:51.046 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1518628) - No such process
00:06:51.046 10:33:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1518628
00:06:51.046 10:33:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:51.305 10:33:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
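With the I/O generator gone (the kill -0 probe reports "No such process") the first phase is torn down and the run moves to the multi-worker phase: nthreads=8, and one null bdev per worker is created before the workers start. The records below show bdev_null_create null0 through null7, each 100 MB with a 4096-byte block size; the bare null0..null7 lines are the RPC printing each new bdev's name. A minimal sketch of this setup loop, reconstructed from the @58-@60 xtrace tags (loop shape inferred; rpc points at spdk/scripts/rpc.py as in the earlier sketch):

    nthreads=8                                      # @58
    pids=()                                         # @58
    for ((i = 0; i < nthreads; i++)); do            # @59
        "$rpc" bdev_null_create "null$i" 100 4096   # @60: name, size in MB, block size in bytes
    done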
00:06:51.305 10:33:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:06:51.305 10:33:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:06:51.305 10:33:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:06:51.305 10:33:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:51.305 10:33:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:06:51.564 null0
00:06:51.564 10:33:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:51.564 10:33:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:51.564 10:33:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:06:51.823 null1
00:06:51.823 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:51.823 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:51.823 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:06:52.082 null2
00:06:52.082 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:52.082 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:52.082 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:06:52.082 null3
00:06:52.341 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:52.341 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:52.341 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:06:52.341 null4
00:06:52.341 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:52.341 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:52.341 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:06:52.601 null5
00:06:52.601 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:52.601 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:52.601 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:06:52.859 null6
00:06:52.860 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:52.860 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:52.860 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:06:53.119 null7
00:06:53.119 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:53.119 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
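Each worker is the add_remove function run in the background: ten passes of nvmf_subsystem_add_ns followed by nvmf_subsystem_remove_ns against its own namespace ID, so eight namespaces hot-plug concurrently on cnode1 (NSID i+1 backed by null<i>). A sketch reconstructed from the @14-@18 and @62-@66 xtrace tags below (the 10-pass bound comes from the (( i < 10 )) records; the function body shape is inferred):

    add_remove() {
        local nsid=$1 bdev=$2                                                          # @14
        for ((i = 0; i < 10; i++)); do                                                 # @16
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev" # @17
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"         # @18
        done
    }
    for ((i = 0; i < nthreads; i++)); do            # @62
        add_remove $((i + 1)) "null$i" &            # @63: one worker per namespace
        pids+=($!)                                  # @64
    done
    wait "${pids[@]}"                               # @66: PIDs 1524246..1524260 in this run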
00:06:53.119 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:06:53.119 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:53.119 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:53.119 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:53.119 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:06:53.119 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:53.119 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:06:53.119 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:53.119 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.119 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:53.119 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:53.119 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:53.119 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:06:53.119 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:53.119 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:06:53.119 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:53.119 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.119 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:53.119 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:06:53.119 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:53.119 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:06:53.119 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:53.119 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:53.119 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.119 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:53.119 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:53.119 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:53.119 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:06:53.119 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:53.119 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:53.119 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:06:53.119 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:53.119 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.119 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:53.119 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:53.119 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:53.119 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:06:53.120 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:53.120 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:06:53.120 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:53.120 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:53.120 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.120 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:53.120 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:53.120 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:53.120 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:06:53.120 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:06:53.120 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:53.120 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:06:53.120 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:53.120 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.120 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:06:53.120 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:53.120 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:53.120 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.120 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:53.120 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:53.120 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:53.120 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:06:53.120 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:53.120 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:06:53.120 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:53.120 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:53.120 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.120 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:53.120 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:53.120 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1524246 1524247 1524249 1524252 1524253 1524255 1524257 1524260
00:06:53.120 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:53.120 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:53.120 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:53.120 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:53.120 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
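From here to the end of the section the xtrace of the eight background workers interleaves freely, which is why the (( ++i )) / (( i < 10 )) counters and the NSID order appear scrambled: each record belongs to whichever worker the shell scheduled last. To watch the resulting namespace churn from outside the test, an observer loop along these lines would work (hypothetical, not part of ns_hotplug_stress.sh; it assumes jq is installed and relies on nvmf_get_subsystems reporting a namespaces array per subsystem):

    while sleep 0.5; do                             # hypothetical observer, not in the test
        "$rpc" nvmf_get_subsystems \
            | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | .namespaces'
    done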
00:06:53.380 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:53.380 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:53.380 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:53.380 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.380 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.380 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:53.380 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.380 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.380 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:53.380 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.380 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.380 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:53.380 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.380 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.380 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:53.380 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.380 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.380 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:53.380 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.380 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.380 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:53.380 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.380 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.380 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:53.380 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.380 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.380 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:53.639 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:53.639 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:53.640 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:53.640 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:53.640 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:53.640 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:53.640 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:53.640 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:53.899 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.899 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.899 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:53.899 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.899 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.899 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:53.899 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.899 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.899 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:53.899 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.899 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.899 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:53.899 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.899 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.899 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:53.899 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.899 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.899 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:53.899 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.899 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.899 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:53.899 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.899 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.899 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:54.158 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:54.158 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:54.158 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:54.158 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:54.158 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:54.158 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:54.158 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:54.158 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:54.417 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:54.417 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:54.417 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:54.417 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:54.417 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:54.417 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:54.417 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:54.417 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:54.417 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:54.417 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:54.417 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:54.417 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:54.417 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:54.417 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:54.417 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:54.417 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:54.417 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:54.417 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:54.417 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:54.417 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:54.417 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:54.417 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:54.417 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:54.417 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:54.417 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:54.417 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:54.417 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:54.417 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:54.417 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:54.417 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:54.676 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:54.676 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:54.676 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:54.676 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:54.676 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:54.676 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:54.676 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:54.676 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:54.676 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:54.676 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:54.676 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:54.676 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:54.676 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:54.676 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:54.676 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:54.676 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:54.676 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:54.676 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:54.676 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:54.676 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:54.676 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:54.676 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:54.676 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:54.934 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:54.934 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:54.935 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:54.935 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:54.935 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:54.935 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:54.935 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:54.935 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:55.193 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:55.193 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.193 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:55.193 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:55.193 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.193 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:55.193 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:55.193 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.193 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:55.193 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:55.193 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.193 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:55.193 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:55.193 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.193 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:55.193 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:55.193 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.193 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:55.193 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:55.193 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.194 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:55.194 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:55.194 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.194 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:55.452 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:55.452 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:55.452 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:55.452 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:55.452 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:55.452 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:55.452 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:55.452 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:55.452 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:55.452 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.452 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:55.452 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:55.452 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.452 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:55.452 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:55.452 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.452 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:55.452 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:55.452 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.452 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:55.452 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:55.453 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.453 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:55.711 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:55.711 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.711 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:55.711 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:55.711 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.711 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:55.711 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:55.711 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.711 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:55.711 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:55.711 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:55.711 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:55.711 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:55.711 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:55.711 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:55.711 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:55.711 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:55.970 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:55.970 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.970 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:55.970 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:55.970 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.970 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:55.970 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:55.970 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.970 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:55.970 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:55.970 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.970 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:55.970 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:55.970 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.970 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:55.970 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:55.970 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.970 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:55.970 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:55.970 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.970 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:55.970 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:55.970 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.970 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:56.228 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:56.228 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:56.228 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:56.228 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:56.228 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:56.228 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:56.228 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:56.228 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:56.487 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:56.487 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:56.487 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:56.488 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:56.488 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:56.488 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:56.488 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:56.488 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:56.488 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:56.488 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:56.488 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:56.488 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:56.488 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:56.488 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10
)) 00:06:56.488 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:56.488 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.488 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.488 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:56.488 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.488 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.488 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:56.488 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.488 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.488 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:56.747 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:56.747 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:56.747 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:56.748 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:56.748 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:56.748 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.748 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:56.748 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:56.748 10:34:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.748 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.748 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:56.748 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.748 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.748 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:56.748 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.748 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.748 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:56.748 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.748 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.748 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:56.748 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.748 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.748 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:56.748 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.748 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.748 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:56.748 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.748 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.748 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:56.748 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.748 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:06:56.748 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:57.008 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:57.008 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:57.008 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.008 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:57.008 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:57.008 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:57.008 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:57.008 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:57.268 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.268 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.268 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.268 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.268 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.268 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.268 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.268 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.268 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.268 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.268 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.268 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.268 
10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.268 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.268 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.268 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.268 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:57.269 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:57.269 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:57.269 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:57.269 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:57.269 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:57.269 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:57.269 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:57.269 rmmod nvme_tcp 00:06:57.269 rmmod nvme_fabrics 00:06:57.269 rmmod nvme_keyring 00:06:57.269 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:57.269 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:57.269 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:57.269 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1518157 ']' 00:06:57.269 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1518157 00:06:57.269 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1518157 ']' 00:06:57.269 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1518157 00:06:57.269 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:57.269 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:57.269 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1518157 00:06:57.529 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:57.529 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:57.529 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1518157' 00:06:57.529 killing process with pid 1518157 00:06:57.529 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1518157 00:06:57.529 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1518157 00:06:57.529 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:57.529 10:34:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:57.529 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:57.529 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:57.529 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:57.529 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:57.529 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:57.529 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:57.529 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:57.529 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:57.529 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:57.529 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:00.068 10:34:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:00.068 00:07:00.068 real 0m48.675s 00:07:00.068 user 3m17.415s 00:07:00.068 sys 0m16.045s 00:07:00.068 10:34:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.068 10:34:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:00.068 ************************************ 00:07:00.068 END TEST nvmf_ns_hotplug_stress 00:07:00.068 ************************************ 00:07:00.068 10:34:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:00.068 10:34:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:00.068 10:34:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.068 10:34:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:00.068 ************************************ 00:07:00.068 START TEST nvmf_delete_subsystem 00:07:00.068 ************************************ 00:07:00.068 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:00.068 * Looking for test storage... 
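[Annotation] The ns_hotplug_stress trace that just finished is the xtrace of a small stress loop. Reconstructed from the trace alone (a minimal sketch: the loop variable and the shuf usage are assumptions, only the rpc.py calls and the NSID-to-bdev mapping are verbatim), each of the ten iterations attaches eight null bdevs as namespaces in a random order and then detaches them in a fresh random order:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for ((i = 0; i < 10; i++)); do                    # ns_hotplug_stress.sh@16
        for n in $(shuf -i 1-8); do                   # attach NSID n backed by null(n-1)
            $rpc nvmf_subsystem_add_ns -n $n nqn.2016-06.io.spdk:cnode1 null$((n - 1))
        done                                          # ns_hotplug_stress.sh@17
        for n in $(shuf -i 1-8); do                   # detach in another random order
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 $n
        done                                          # ns_hotplug_stress.sh@18
    done

The randomized ordering is the point: it exercises the namespace hot-add/hot-remove paths in arbitrary interleavings rather than any fixed sequence.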
00:07:00.068 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:00.068 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:00.068 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:07:00.068 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:00.068 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:00.068 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.068 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.068 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.068 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.068 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.068 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.068 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.068 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.068 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.068 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.068 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.068 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:00.068 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:00.068 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.068 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:00.068 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:00.068 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:00.068 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.068 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:00.068 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.068 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:00.068 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:00.068 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.068 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:00.068 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.068 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.068 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.068 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:00.068 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.068 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:00.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.068 --rc genhtml_branch_coverage=1 00:07:00.069 --rc genhtml_function_coverage=1 00:07:00.069 --rc genhtml_legend=1 00:07:00.069 --rc geninfo_all_blocks=1 00:07:00.069 --rc geninfo_unexecuted_blocks=1 00:07:00.069 00:07:00.069 ' 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:00.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.069 --rc genhtml_branch_coverage=1 00:07:00.069 --rc genhtml_function_coverage=1 00:07:00.069 --rc genhtml_legend=1 00:07:00.069 --rc geninfo_all_blocks=1 00:07:00.069 --rc geninfo_unexecuted_blocks=1 00:07:00.069 00:07:00.069 ' 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:00.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.069 --rc genhtml_branch_coverage=1 00:07:00.069 --rc genhtml_function_coverage=1 00:07:00.069 --rc genhtml_legend=1 00:07:00.069 --rc geninfo_all_blocks=1 00:07:00.069 --rc geninfo_unexecuted_blocks=1 00:07:00.069 00:07:00.069 ' 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:00.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.069 --rc genhtml_branch_coverage=1 00:07:00.069 --rc genhtml_function_coverage=1 00:07:00.069 --rc genhtml_legend=1 00:07:00.069 --rc geninfo_all_blocks=1 00:07:00.069 --rc geninfo_unexecuted_blocks=1 00:07:00.069 00:07:00.069 ' 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:00.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:00.069 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:06.642 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:06.642 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:06.642 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:06.642 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:06.642 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:06.642 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:06.642 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:06.642 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:06.642 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:06.642 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:06.642 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:06.642 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:06.642 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:07:06.642 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:06.642 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:06.642 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:06.642 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:06.642 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:06.642 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:06.643 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:06.643 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:06.643 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:06.643 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:06.643 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:06.643 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:06.643 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:06.643 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:06.643 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:06.643 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:06.643 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:06.643 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:06.643 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:06.643 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:06.643 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:06.643 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:06.643 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:06.643 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:06.643 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:06.643 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:06.643 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:06.643 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:06.643 
10:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:06.643 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:06.643 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:06.643 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:06.643 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:06.643 Found net devices under 0000:86:00.0: cvl_0_0 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:06.643 Found net devices under 0000:86:00.1: cvl_0_1 
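[Annotation] Both E810 ports (device ID 0x159b, driver ice) have now been found, and what follows is nvmf_tcp_init moving one of them into a private network namespace so target and initiator can talk over real hardware on a single machine. Flattened, the traced sequence below boils down to roughly this (interface names and addresses are exactly as logged; only the comments are added):

    ip netns add cvl_0_0_ns_spdk                                        # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port moves in
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, host side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                  # host -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> host sanity check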
00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:06.643 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:06.643 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:06.643 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:07:06.643 00:07:06.643 --- 10.0.0.2 ping statistics --- 00:07:06.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.644 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:07:06.644 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:06.644 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:06.644 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:07:06.644 00:07:06.644 --- 10.0.0.1 ping statistics --- 00:07:06.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.644 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:07:06.644 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:06.644 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:07:06.644 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:06.644 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:06.644 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:06.644 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:06.644 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:06.644 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:06.644 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:06.644 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:06.644 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:06.644 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:06.644 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:06.644 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1528850 00:07:06.644 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1528850 00:07:06.644 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:06.644 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1528850 ']' 00:07:06.644 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.644 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.644 10:34:13 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.644 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.644 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:06.644 [2024-11-19 10:34:13.363066] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:07:06.644 [2024-11-19 10:34:13.363117] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:06.644 [2024-11-19 10:34:13.443976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:06.644 [2024-11-19 10:34:13.486704] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:06.644 [2024-11-19 10:34:13.486740] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:06.644 [2024-11-19 10:34:13.486746] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:06.644 [2024-11-19 10:34:13.486753] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:06.644 [2024-11-19 10:34:13.486758] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:06.644 [2024-11-19 10:34:13.487901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.644 [2024-11-19 10:34:13.487903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.904 10:34:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.904 10:34:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:06.904 10:34:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:06.904 10:34:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:06.904 10:34:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:06.904 10:34:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:06.904 10:34:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:06.904 10:34:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.904 10:34:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:06.904 [2024-11-19 10:34:14.234356] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:06.904 10:34:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.904 10:34:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:06.904 10:34:14 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.904 10:34:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:06.904 10:34:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.904 10:34:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:06.904 10:34:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.904 10:34:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:06.904 [2024-11-19 10:34:14.250504] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:06.904 10:34:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.904 10:34:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:06.904 10:34:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.904 10:34:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:06.904 NULL1 00:07:06.904 10:34:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.904 10:34:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:06.904 10:34:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.904 10:34:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:06.904 Delay0 00:07:06.904 10:34:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.904 10:34:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.904 10:34:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.904 10:34:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:06.904 10:34:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.904 10:34:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1528886 00:07:06.904 10:34:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:06.904 10:34:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:06.904 [2024-11-19 10:34:14.345191] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
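[Annotation] Stripped of the xtrace plumbing, the setup delete_subsystem.sh has just performed reduces to the sequence below. Every call appears verbatim in the trace; the rpc.py path is shortened, and the trailing '&' on perf reflects the script backgrounding it (perf_pid) before the traced 'sleep 2':

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_null_create NULL1 1000 512    # null bdev: 1000 MB, 512-byte blocks
    rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    sleep 2
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The delay bdev (the 1000000 values are per-I/O added latencies, in microseconds per SPDK's delay bdev) guarantees plenty of queued I/O is still outstanding when the subsystem is deleted two seconds into the five-second perf run; that collision is exactly what the test is after.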
00:07:09.440 10:34:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:07:09.440 10:34:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:09.440 10:34:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
[... several hundred repetitive 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)' completions and 'starting I/O failed: -6' entries elided: deleting the subsystem mid-run fails every outstanding and newly submitted perf I/O. Interleaved nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state *ERROR* entries report 'The recv state of tqpair=... is same with the state(6) to be set' for tqpairs 0x22d5680, 0x7f1408000c40, 0x22d69a0, 0x7f140800d020, 0x7f140800d800, 0x22d54a0 and 0x22d5860 between 10:34:16 and 10:34:17 ...]
00:07:10.010 Initializing NVMe Controllers
00:07:10.010 Attached to NVMe over Fabrics controller at
10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:10.010 Controller IO queue size 128, less than required.
00:07:10.010 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:10.010 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:10.010 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:10.010 Initialization complete. Launching workers.
00:07:10.010 ========================================================
00:07:10.010 Latency(us)
00:07:10.010 Device Information : IOPS MiB/s Average min max
00:07:10.010 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 167.41 0.08 900899.94 294.25 1010575.96
00:07:10.010 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 162.44 0.08 931275.77 264.57 2003530.27
00:07:10.010 ========================================================
00:07:10.010 Total : 329.85 0.16 915859.12 264.57 2003530.27
00:07:10.010
[2024-11-19 10:34:17.430291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d69a0 (9): Bad file descriptor
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:07:10.010 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:10.010 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:07:10.010 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1528886
00:07:10.010 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:07:10.577 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:07:10.577 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1528886
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1528886) - No such process
00:07:10.577 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1528886
00:07:10.577 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:07:10.577 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1528886
00:07:10.577 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:07:10.577 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:10.577 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:07:10.577 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:10.577 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1528886
00:07:10.577 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:07:10.577 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:10.577 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
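The block just traced (delete_subsystem.sh@34-45) is a two-part assertion: poll the perf pid with kill -0 until it stops existing, then demand via the harness NOT helper that wait returns non-zero, i.e. that perf really failed once its subsystem vanished. A condensed sketch of the pattern, not the harness code verbatim:

delay=0
while kill -0 "$perf_pid" 2>/dev/null; do      # pid still exists?
    (( delay++ > 30 )) && { echo "perf still alive after ~15s"; exit 1; }
    sleep 0.5
done
if wait "$perf_pid"; then                      # reap perf's exit status
    echo "perf exited 0 but its I/O should have failed"; exit 1
fi

Here a successful wait would mean perf ran clean; the 'spdk_nvme_perf: errors occurred' exit above is exactly what the test wants to see.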
00:07:10.577 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:10.577 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:10.577 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.577 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:10.577 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.577 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:10.577 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.577 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:10.577 [2024-11-19 10:34:17.960746] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:10.577 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.577 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.577 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.577 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:10.577 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.577 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1529576 00:07:10.577 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:10.577 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:10.577 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1529576 00:07:10.577 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:10.838 [2024-11-19 10:34:18.047318] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
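The second pass (delete_subsystem.sh@48-58) recreates the subsystem, listener and Delay0 namespace, then starts a 3-second perf run (-t 3) and, instead of deleting anything, simply polls it to completion; the (( delay++ > 20 )) / kill -0 / sleep 0.5 entries that follow are that loop ticking every half second. A condensed sketch of the bounded wait (the 20-iteration cap is from the trace; treating overrun as a failure is an assumption about the harness):

delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 20 )) && { echo "perf overran its 3s run"; exit 1; }   # ~10 s budget
    sleep 0.5
done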
00:07:11.098 10:34:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:11.098 10:34:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1529576
00:07:11.098 10:34:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
[... four more identical (( delay++ > 20 )) / kill -0 1529576 / sleep 0.5 polling iterations at 00:07:11.666, 00:07:12.234, 00:07:12.802 and 00:07:13.072 elided ...]
00:07:13.639 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:13.639 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1529576
00:07:13.639 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:13.898 Initializing NVMe Controllers
00:07:13.898 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:13.898 Controller IO queue size 128, less than required.
00:07:13.898 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:13.898 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:13.898 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:13.898 Initialization complete. Launching workers.
00:07:13.898 ========================================================
00:07:13.898 Latency(us)
00:07:13.898 Device Information : IOPS MiB/s Average min max
00:07:13.898 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002445.34 1000138.07 1008568.90
00:07:13.898 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003195.51 1000193.93 1009992.49
00:07:13.898 ========================================================
00:07:13.898 Total : 256.00 0.12 1002820.42 1000138.07 1009992.49
00:07:13.898
00:07:14.157 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:14.157 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1529576
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1529576) - No such process
00:07:14.157 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1529576
00:07:14.157 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:07:14.157 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:07:14.157 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:14.157 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:07:14.157 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:14.157 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:07:14.157 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:14.157 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:07:14.157 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:14.157 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:07:14.157 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:07:14.157 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1528850 ']'
00:07:14.157 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1528850
00:07:14.157 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1528850 ']'
00:07:14.157 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1528850
00:07:14.157 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:07:14.157 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:14.157 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1528850
00:07:14.417 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:14.417 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '['
reactor_0 = sudo ']' 00:07:14.417 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1528850' 00:07:14.417 killing process with pid 1528850 00:07:14.417 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1528850 00:07:14.417 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1528850 00:07:14.417 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:14.417 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:14.417 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:14.417 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:14.417 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:14.417 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:14.417 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:14.417 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:14.417 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:14.417 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:14.417 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:14.417 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:16.957 10:34:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:16.957 00:07:16.957 real 0m16.802s 00:07:16.957 user 0m30.453s 00:07:16.957 sys 0m5.602s 00:07:16.957 10:34:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.957 10:34:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:16.957 ************************************ 00:07:16.957 END TEST nvmf_delete_subsystem 00:07:16.957 ************************************ 00:07:16.957 10:34:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:16.957 10:34:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:16.957 10:34:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.957 10:34:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:16.957 ************************************ 00:07:16.957 START TEST nvmf_host_management 00:07:16.957 ************************************ 00:07:16.957 10:34:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:16.957 * Looking for test storage... 
00:07:16.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:16.957 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:16.957 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:07:16.957 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:16.957 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:16.957 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:16.957 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:16.957 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:16.957 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:16.957 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:16.957 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:16.957 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:16.957 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:16.957 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:16.957 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:16.957 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:16.957 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:16.957 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:16.957 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:16.957 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:16.957 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:16.957 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:16.957 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:16.957 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:16.957 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:16.957 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:16.957 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:16.957 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:16.957 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:16.957 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:16.957 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:16.957 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:16.957 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:16.957 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:16.957 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:16.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.957 --rc genhtml_branch_coverage=1 00:07:16.957 --rc genhtml_function_coverage=1 00:07:16.957 --rc genhtml_legend=1 00:07:16.957 --rc geninfo_all_blocks=1 00:07:16.957 --rc geninfo_unexecuted_blocks=1 00:07:16.957 00:07:16.957 ' 00:07:16.957 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:16.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.957 --rc genhtml_branch_coverage=1 00:07:16.957 --rc genhtml_function_coverage=1 00:07:16.957 --rc genhtml_legend=1 00:07:16.957 --rc geninfo_all_blocks=1 00:07:16.957 --rc geninfo_unexecuted_blocks=1 00:07:16.957 00:07:16.957 ' 00:07:16.957 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:16.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.957 --rc genhtml_branch_coverage=1 00:07:16.957 --rc genhtml_function_coverage=1 00:07:16.957 --rc genhtml_legend=1 00:07:16.957 --rc geninfo_all_blocks=1 00:07:16.957 --rc geninfo_unexecuted_blocks=1 00:07:16.957 00:07:16.957 ' 00:07:16.957 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:16.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.958 --rc genhtml_branch_coverage=1 00:07:16.958 --rc genhtml_function_coverage=1 00:07:16.958 --rc genhtml_legend=1 00:07:16.958 --rc geninfo_all_blocks=1 00:07:16.958 --rc geninfo_unexecuted_blocks=1 00:07:16.958 00:07:16.958 ' 00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... six more repetitions of the same three toolchain directories elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same duplicated toolchain value, now led by /opt/go, elided ...]
00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same duplicated toolchain value, now led by /opt/protoc, elided ...]
00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH
00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... final exported PATH value, as above, elided ...]
00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0
00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33
-- # '[' '' -eq 1 ']' 00:07:16.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:16.958 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
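From here nvmftestinit takes over for the host_management test: gather_supported_nvmf_pci_devs (whose e810/x722/mlx device-ID tables follow) scans the PCI bus, matches the two Intel E810 ports (0x8086:0x159b, netdevs cvl_0_0 and cvl_0_1), and nvmf_tcp_init then splits them across network namespaces so one machine can act as both target and initiator. The ip commands it issues below reduce to roughly this sketch, with interface names and 10.0.0.x addresses taken from the trace:

ip netns add cvl_0_0_ns_spdk                          # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up       # loopback inside the namespace

This is why every target listener in the traces above binds 10.0.0.2 while the initiator connects from 10.0.0.1: the two ports sit in separate network stacks despite sharing one host.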
-ga e810 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:23.534 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:23.534 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:23.534 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:23.535 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:23.535 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:23.535 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:23.535 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:23.535 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:23.535 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:23.535 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:23.535 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:23.535 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:23.535 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:23.535 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:23.535 Found net devices under 0000:86:00.0: cvl_0_0 00:07:23.535 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:23.535 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:23.535 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:23.535 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:23.535 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:23.535 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:23.535 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:23.535 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:23.535 10:34:29 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:23.535 Found net devices under 0000:86:00.1: cvl_0_1 00:07:23.535 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:23.535 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:23.535 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:23.535 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:23.535 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:23.535 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:23.535 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:23.535 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:23.535 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:23.535 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:23.535 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:23.535 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:23.535 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:23.535 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:23.535 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:23.535 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:23.535 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:23.535 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:23.535 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:23.535 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:23.535 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:23.535 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:23.535 10:34:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:23.535 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:23.535 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:23.535 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:23.535 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:23.535 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:23.535 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:23.535 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:23.535 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.461 ms 00:07:23.535 00:07:23.535 --- 10.0.0.2 ping statistics --- 00:07:23.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.535 rtt min/avg/max/mdev = 0.461/0.461/0.461/0.000 ms 00:07:23.535 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:23.535 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:23.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:07:23.535 00:07:23.535 --- 10.0.0.1 ping statistics --- 00:07:23.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.535 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:07:23.535 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:23.535 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:23.535 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:23.535 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:23.535 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:23.535 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:23.535 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:23.535 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:23.535 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:23.535 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:23.535 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:23.535 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:23.535 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:23.535 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:23.535 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.535 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1533812 00:07:23.535 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1533812 00:07:23.535 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:23.535 10:34:30 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1533812 ']' 00:07:23.535 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.535 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:23.535 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.535 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:23.535 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.535 [2024-11-19 10:34:30.222042] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:07:23.535 [2024-11-19 10:34:30.222096] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:23.535 [2024-11-19 10:34:30.299959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:23.535 [2024-11-19 10:34:30.344222] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:23.535 [2024-11-19 10:34:30.344262] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:23.535 [2024-11-19 10:34:30.344270] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:23.535 [2024-11-19 10:34:30.344276] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:23.535 [2024-11-19 10:34:30.344282] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
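(For reference, the nvmf_tcp_init plumbing traced a few entries back condenses to the sketch below. The interface names, the namespace name, and the 10.0.0.0/24 addresses are the ones this run discovered; `ipts` is the suite's iptables wrapper, which tags its rule with an SPDK_NVMF comment so teardown can strip it again. This is a minimal reconstruction, not verbatim script code.)

    # Minimal sketch of the loopback NVMe/TCP topology set up above.
    ip netns add cvl_0_0_ns_spdk                      # target lives in its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port out of the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # what ipts expands to
    ping -c 1 10.0.0.2                                # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # and back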
00:07:23.535 [2024-11-19 10:34:30.345806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:23.535 [2024-11-19 10:34:30.345915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:23.535 [2024-11-19 10:34:30.345933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.536 [2024-11-19 10:34:30.345932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:23.536 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:23.536 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:23.536 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:23.536 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:23.536 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.536 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:23.536 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:23.536 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.536 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.536 [2024-11-19 10:34:30.487266] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:23.536 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.536 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:23.536 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:23.536 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.536 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:23.536 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:23.536 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:23.536 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.536 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.536 Malloc0 00:07:23.536 [2024-11-19 10:34:30.560732] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:23.536 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.536 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:23.536 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:23.536 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.536 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=1533856 00:07:23.536 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1533856 /var/tmp/bdevperf.sock 00:07:23.536 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1533856 ']' 00:07:23.536 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:23.536 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:23.536 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:23.536 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:23.536 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:23.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:23.536 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:23.536 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:23.536 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:23.536 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.536 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:23.536 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:23.536 { 00:07:23.536 "params": { 00:07:23.536 "name": "Nvme$subsystem", 00:07:23.536 "trtype": "$TEST_TRANSPORT", 00:07:23.536 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:23.536 "adrfam": "ipv4", 00:07:23.536 "trsvcid": "$NVMF_PORT", 00:07:23.536 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:23.536 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:23.536 "hdgst": ${hdgst:-false}, 00:07:23.536 "ddgst": ${ddgst:-false} 00:07:23.536 }, 00:07:23.536 "method": "bdev_nvme_attach_controller" 00:07:23.536 } 00:07:23.536 EOF 00:07:23.536 )") 00:07:23.536 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:23.536 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:23.536 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:23.536 10:34:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:23.536 "params": { 00:07:23.536 "name": "Nvme0", 00:07:23.536 "trtype": "tcp", 00:07:23.536 "traddr": "10.0.0.2", 00:07:23.536 "adrfam": "ipv4", 00:07:23.536 "trsvcid": "4420", 00:07:23.536 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:23.536 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:23.536 "hdgst": false, 00:07:23.536 "ddgst": false 00:07:23.536 }, 00:07:23.536 "method": "bdev_nvme_attach_controller" 00:07:23.536 }' 00:07:23.536 [2024-11-19 10:34:30.656412] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
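(The `--json /dev/fd/63` argument above is fed by gen_nvmf_target_json. The trace only echoes the inner bdev_nvme_attach_controller fragment, so the wrapper object in the sketch below is reconstructed from the function's visible cat/jq/IFS steps and should be read as an approximation of the generated config, not verbatim output; the values are the ones printf'd for this run.)

    # Hand-rolled equivalent of the gen_nvmf_target_json | bdevperf step.
    cat > /tmp/nvme0_attach.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false, "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # Same flags as the traced invocation; -t 10 runs the verify workload for 10 s.
    build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/nvme0_attach.json \
        -q 64 -o 65536 -w verify -t 10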
00:07:23.536 [2024-11-19 10:34:30.656454] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1533856 ] 00:07:23.536 [2024-11-19 10:34:30.732705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.536 [2024-11-19 10:34:30.774225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.799 Running I/O for 10 seconds... 00:07:23.799 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:23.799 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:23.799 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:23.799 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.799 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.799 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.799 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:23.799 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:23.799 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:23.799 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:23.799 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:23.799 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:23.799 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:23.799 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:23.799 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:23.799 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:23.799 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.799 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.799 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.799 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=101 00:07:23.799 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 101 -ge 100 ']' 00:07:23.799 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:23.799 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:23.799 10:34:31 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:23.799 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:23.799 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.799 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.799 [2024-11-19 10:34:31.202796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf60200 is same with the state(6) to be set 00:07:23.799 [2024-11-19 10:34:31.202830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf60200 is same with the state(6) to be set 00:07:23.799 [2024-11-19 10:34:31.202838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf60200 is same with the state(6) to be set 00:07:23.799 [2024-11-19 10:34:31.202845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf60200 is same with the state(6) to be set 00:07:23.799 [2024-11-19 10:34:31.202851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf60200 is same with the state(6) to be set 00:07:23.799 [2024-11-19 10:34:31.202857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf60200 is same with the state(6) to be set 00:07:23.799 [2024-11-19 10:34:31.202863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf60200 is same with the state(6) to be set 00:07:23.799 [2024-11-19 10:34:31.202869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf60200 is same with the state(6) to be set 00:07:23.799 [2024-11-19 10:34:31.202875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf60200 is same with the state(6) to be set 00:07:23.799 [2024-11-19 10:34:31.202881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf60200 is same with the state(6) to be set 00:07:23.799 [2024-11-19 10:34:31.202887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf60200 is same with the state(6) to be set 00:07:23.799 [2024-11-19 10:34:31.202894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf60200 is same with the state(6) to be set 00:07:23.799 [2024-11-19 10:34:31.202900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf60200 is same with the state(6) to be set 00:07:23.799 [2024-11-19 10:34:31.202906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf60200 is same with the state(6) to be set 00:07:23.799 [2024-11-19 10:34:31.202912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf60200 is same with the state(6) to be set 00:07:23.799 [2024-11-19 10:34:31.202918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf60200 is same with the state(6) to be set 00:07:23.799 [2024-11-19 10:34:31.202924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf60200 is same with the state(6) to be set 00:07:23.799 [2024-11-19 10:34:31.202931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf60200 is same with the state(6) to be set 00:07:23.799 [2024-11-19 10:34:31.202937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xf60200 is same with the state(6) to be set 00:07:23.799 [2024-11-19 10:34:31.202943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf60200 is same with the state(6) to be set 00:07:23.799 [2024-11-19 10:34:31.202960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf60200 is same with the state(6) to be set 00:07:23.799 [2024-11-19 10:34:31.202966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf60200 is same with the state(6) to be set 00:07:23.799 [2024-11-19 10:34:31.202973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf60200 is same with the state(6) to be set 00:07:23.799 [2024-11-19 10:34:31.202978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf60200 is same with the state(6) to be set 00:07:23.799 [2024-11-19 10:34:31.202985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf60200 is same with the state(6) to be set 00:07:23.799 [2024-11-19 10:34:31.202991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf60200 is same with the state(6) to be set 00:07:23.799 [2024-11-19 10:34:31.202996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf60200 is same with the state(6) to be set 00:07:23.799 [2024-11-19 10:34:31.203003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf60200 is same with the state(6) to be set 00:07:23.799 [2024-11-19 10:34:31.203009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf60200 is same with the state(6) to be set 00:07:23.799 [2024-11-19 10:34:31.203015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf60200 is same with the state(6) to be set 00:07:23.799 [2024-11-19 10:34:31.203020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf60200 is same with the state(6) to be set 00:07:23.799 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.799 [2024-11-19 10:34:31.207904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.799 [2024-11-19 10:34:31.207935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.799 [2024-11-19 10:34:31.207958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.799 [2024-11-19 10:34:31.207966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.799 [2024-11-19 10:34:31.207975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.799 [2024-11-19 10:34:31.207982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.799 [2024-11-19 10:34:31.207990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.799 [2024-11-19 10:34:31.207997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:07:23.799 [2024-11-19 10:34:31.208008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.799 [2024-11-19 10:34:31.208016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.799 [2024-11-19 10:34:31.208024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.799 [2024-11-19 10:34:31.208032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.799 [2024-11-19 10:34:31.208040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.800 [2024-11-19 10:34:31.208046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.800 [2024-11-19 10:34:31.208054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.800 [2024-11-19 10:34:31.208065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.800 [2024-11-19 10:34:31.208074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.800 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:23.800 [2024-11-19 10:34:31.208082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.800 [2024-11-19 10:34:31.208093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.800 [2024-11-19 10:34:31.208099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.800 [2024-11-19 10:34:31.208108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.800 [2024-11-19 10:34:31.208115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.800 [2024-11-19 10:34:31.208123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.800 [2024-11-19 10:34:31.208130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.800 [2024-11-19 10:34:31.208139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.800 [2024-11-19 10:34:31.208145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.800 [2024-11-19 10:34:31.208153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:23.800 [2024-11-19 10:34:31.208160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.800 [2024-11-19 10:34:31.208168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.800 [2024-11-19 10:34:31.208175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.800 [2024-11-19 10:34:31.208183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.800 [2024-11-19 10:34:31.208189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.800 [2024-11-19 10:34:31.208197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.800 [2024-11-19 10:34:31.208205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.800 [2024-11-19 10:34:31.208212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.800 [2024-11-19 10:34:31.208219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.800 [2024-11-19 10:34:31.208229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.800 [2024-11-19 10:34:31.208236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.800 [2024-11-19 10:34:31.208245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.800 [2024-11-19 10:34:31.208253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.800 [2024-11-19 10:34:31.208262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.800 [2024-11-19 10:34:31.208269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.800 [2024-11-19 10:34:31.208278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.800 [2024-11-19 10:34:31.208285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.800 [2024-11-19 10:34:31.208293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.800 [2024-11-19 10:34:31.208300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.800 [2024-11-19 10:34:31.208308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.800 
[2024-11-19 10:34:31.208314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.800 [2024-11-19 10:34:31.208323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.800 [2024-11-19 10:34:31.208330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.800 [2024-11-19 10:34:31.208340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.800 [2024-11-19 10:34:31.208347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.800 [2024-11-19 10:34:31.208354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.800 [2024-11-19 10:34:31.208361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.800 [2024-11-19 10:34:31.208369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.800 [2024-11-19 10:34:31.208376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.800 [2024-11-19 10:34:31.208384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.800 [2024-11-19 10:34:31.208391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.800 [2024-11-19 10:34:31.208400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.800 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.800 [2024-11-19 10:34:31.208406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.800 [2024-11-19 10:34:31.208415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.800 [2024-11-19 10:34:31.208421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.800 [2024-11-19 10:34:31.208430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.800 [2024-11-19 10:34:31.208438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.800 [2024-11-19 10:34:31.208447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.800 [2024-11-19 10:34:31.208453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.800 [2024-11-19 10:34:31.208461] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.800 [2024-11-19 10:34:31.208467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.800 [2024-11-19 10:34:31.208476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.800 [2024-11-19 10:34:31.208483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.800 [2024-11-19 10:34:31.208492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.800 [2024-11-19 10:34:31.208499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.800 [2024-11-19 10:34:31.208507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.800 [2024-11-19 10:34:31.208513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.800 [2024-11-19 10:34:31.208521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.801 [2024-11-19 10:34:31.208528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.801 [2024-11-19 10:34:31.208535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.801 [2024-11-19 10:34:31.208542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.801 [2024-11-19 10:34:31.208550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.801 [2024-11-19 10:34:31.208557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.801 [2024-11-19 10:34:31.208565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.801 [2024-11-19 10:34:31.208572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.801 [2024-11-19 10:34:31.208580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.801 [2024-11-19 10:34:31.208586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.801 [2024-11-19 10:34:31.208594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.801 [2024-11-19 10:34:31.208601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.801 [2024-11-19 10:34:31.208609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.801 [2024-11-19 10:34:31.208616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.801 [2024-11-19 10:34:31.208631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.801 [2024-11-19 10:34:31.208639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.801 [2024-11-19 10:34:31.208648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.801 [2024-11-19 10:34:31.208655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.801 [2024-11-19 10:34:31.208663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.801 [2024-11-19 10:34:31.208670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.801 [2024-11-19 10:34:31.208678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.801 [2024-11-19 10:34:31.208685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.801 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.801 [2024-11-19 10:34:31.208693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.801 [2024-11-19 10:34:31.208700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.801 [2024-11-19 10:34:31.208709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.801 [2024-11-19 10:34:31.208716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.801 [2024-11-19 10:34:31.208725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.801 [2024-11-19 10:34:31.208732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.801 [2024-11-19 10:34:31.208740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.801 [2024-11-19 10:34:31.208746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.801 [2024-11-19 10:34:31.208754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.801 [2024-11-19 10:34:31.208761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:07:23.801 [2024-11-19 10:34:31.208769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.801 [2024-11-19 10:34:31.208776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.801 [2024-11-19 10:34:31.208784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.801 [2024-11-19 10:34:31.208791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.801 [2024-11-19 10:34:31.208799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.801 [2024-11-19 10:34:31.208805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.801 [2024-11-19 10:34:31.208814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.801 [2024-11-19 10:34:31.208822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.801 [2024-11-19 10:34:31.208830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.801 [2024-11-19 10:34:31.208837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.801 [2024-11-19 10:34:31.208846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.801 [2024-11-19 10:34:31.208852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.801 [2024-11-19 10:34:31.208860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.801 [2024-11-19 10:34:31.208867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.801 [2024-11-19 10:34:31.208876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.801 [2024-11-19 10:34:31.208882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.801 [2024-11-19 10:34:31.208891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.801 [2024-11-19 10:34:31.208897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.801 [2024-11-19 10:34:31.208905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.801 [2024-11-19 10:34:31.208911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.801 
[2024-11-19 10:34:31.208919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.801 [2024-11-19 10:34:31.208927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.801 [2024-11-19 10:34:31.208957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:07:23.801 [2024-11-19 10:34:31.209902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:23.801 task offset: 24576 on job bdev=Nvme0n1 fails 00:07:23.801 00:07:23.801 Latency(us) 00:07:23.801 [2024-11-19T09:34:31.251Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:23.802 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:23.802 Job: Nvme0n1 ended in about 0.11 seconds with error 00:07:23.802 Verification LBA range: start 0x0 length 0x400 00:07:23.802 Nvme0n1 : 0.11 1737.71 108.61 579.24 0.00 25434.42 1624.15 27240.18 00:07:23.802 [2024-11-19T09:34:31.251Z] =================================================================================================================== 00:07:23.802 [2024-11-19T09:34:31.251Z] Total : 1737.71 108.61 579.24 0.00 25434.42 1624.15 27240.18 00:07:23.802 [2024-11-19 10:34:31.212322] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:23.802 [2024-11-19 10:34:31.212344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x244b500 (9): Bad file descriptor 00:07:23.802 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.802 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:23.802 [2024-11-19 10:34:31.224714] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
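(The abort storm and reset above are deliberate: once bdevperf clears ~100 reads, the test revokes host0's access to the subsystem, so the target tears down the qpair, every in-flight WRITE completes as ABORTED - SQ DELETION, the initiator sees CQ transport error -6, and its controller reset only succeeds once the host is re-added. A sketch of that choreography, assuming rpc_cmd resolves to the repo's scripts/rpc.py, the target listens on the default /var/tmp/spdk.sock, and a 1-second poll stands in for the script's waitforio countdown:)

    # Gate on ~100 completed reads, then yank and restore host access.
    for _ in {1..10}; do
        ops=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
              | jq -r '.bdevs[0].num_read_ops')
        (( ops >= 100 )) && break
        sleep 1
    done
    scripts/rpc.py nvmf_subsystem_remove_host \
        nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # qpair torn down mid-I/O
    scripts/rpc.py nvmf_subsystem_add_host \
        nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # lets the reset reconnect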
00:07:25.181 10:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1533856 00:07:25.181 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1533856) - No such process 00:07:25.181 10:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:25.181 10:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:25.181 10:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:25.181 10:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:25.181 10:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:25.181 10:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:25.181 10:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:25.181 10:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:25.181 { 00:07:25.181 "params": { 00:07:25.181 "name": "Nvme$subsystem", 00:07:25.181 "trtype": "$TEST_TRANSPORT", 00:07:25.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:25.181 "adrfam": "ipv4", 00:07:25.181 "trsvcid": "$NVMF_PORT", 00:07:25.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:25.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:25.181 "hdgst": ${hdgst:-false}, 00:07:25.181 "ddgst": ${ddgst:-false} 00:07:25.181 }, 00:07:25.181 "method": "bdev_nvme_attach_controller" 00:07:25.181 } 00:07:25.181 EOF 00:07:25.181 )") 00:07:25.181 10:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:25.181 10:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:25.181 10:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:25.181 10:34:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:25.181 "params": { 00:07:25.181 "name": "Nvme0", 00:07:25.181 "trtype": "tcp", 00:07:25.181 "traddr": "10.0.0.2", 00:07:25.181 "adrfam": "ipv4", 00:07:25.181 "trsvcid": "4420", 00:07:25.181 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:25.181 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:25.181 "hdgst": false, 00:07:25.181 "ddgst": false 00:07:25.181 }, 00:07:25.181 "method": "bdev_nvme_attach_controller" 00:07:25.181 }' 00:07:25.181 [2024-11-19 10:34:32.275929] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:07:25.181 [2024-11-19 10:34:32.275983] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1534110 ] 00:07:25.181 [2024-11-19 10:34:32.350300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.181 [2024-11-19 10:34:32.390735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.181 Running I/O for 1 seconds... 
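(A quick cross-check on the bdevperf tables in this section: at the 65536-byte I/O size each I/O is 1/16 MiB, so MiB/s should equal IOPS/16. The failed pass above reports 1737.71 IOPS -> 1737.71/16 = 108.61 MiB/s, and the 1-second verify pass below reports 2007.88 IOPS -> 2007.88/16 = 125.49 MiB/s; both columns agree, so the counters survived the reset intact.)

    # IOPS-to-MiB/s consistency check for the 64 KiB I/O size used here.
    awk 'BEGIN { printf "%.2f %.2f\n", 1737.71/16, 2007.88/16 }'   # -> 108.61 125.49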
00:07:26.378 1984.00 IOPS, 124.00 MiB/s 00:07:26.378 Latency(us) 00:07:26.378 [2024-11-19T09:34:33.827Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:26.378 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:26.378 Verification LBA range: start 0x0 length 0x400 00:07:26.378 Nvme0n1 : 1.02 2007.88 125.49 0.00 0.00 31371.54 6097.70 27582.11 00:07:26.378 [2024-11-19T09:34:33.827Z] =================================================================================================================== 00:07:26.378 [2024-11-19T09:34:33.827Z] Total : 2007.88 125.49 0.00 0.00 31371.54 6097.70 27582.11 00:07:26.378 10:34:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:26.379 10:34:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:26.379 10:34:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:26.379 10:34:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:26.379 10:34:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:26.379 10:34:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:26.379 10:34:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:26.379 10:34:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:26.379 10:34:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:26.379 10:34:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:26.379 10:34:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:26.379 rmmod nvme_tcp 00:07:26.379 rmmod nvme_fabrics 00:07:26.379 rmmod nvme_keyring 00:07:26.379 10:34:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:26.379 10:34:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:26.379 10:34:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:26.379 10:34:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1533812 ']' 00:07:26.379 10:34:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1533812 00:07:26.379 10:34:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1533812 ']' 00:07:26.379 10:34:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1533812 00:07:26.379 10:34:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:26.379 10:34:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:26.379 10:34:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1533812 00:07:26.638 10:34:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:26.638 10:34:33 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:26.638 10:34:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1533812' 00:07:26.638 killing process with pid 1533812 00:07:26.638 10:34:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1533812 00:07:26.638 10:34:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1533812 00:07:26.638 [2024-11-19 10:34:34.014020] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:26.638 10:34:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:26.638 10:34:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:26.638 10:34:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:26.638 10:34:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:26.638 10:34:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:26.638 10:34:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:26.638 10:34:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:26.638 10:34:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:26.638 10:34:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:26.638 10:34:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.638 10:34:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:26.638 10:34:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:29.177 00:07:29.177 real 0m12.177s 00:07:29.177 user 0m18.437s 00:07:29.177 sys 0m5.538s 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:29.177 ************************************ 00:07:29.177 END TEST nvmf_host_management 00:07:29.177 ************************************ 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:29.177 ************************************ 00:07:29.177 START TEST nvmf_lvol 00:07:29.177 ************************************ 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:29.177 * Looking for test storage... 00:07:29.177 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:29.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.177 --rc genhtml_branch_coverage=1 00:07:29.177 --rc genhtml_function_coverage=1 00:07:29.177 --rc genhtml_legend=1 00:07:29.177 --rc geninfo_all_blocks=1 00:07:29.177 --rc geninfo_unexecuted_blocks=1 00:07:29.177 00:07:29.177 ' 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:29.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.177 --rc genhtml_branch_coverage=1 00:07:29.177 --rc genhtml_function_coverage=1 00:07:29.177 --rc genhtml_legend=1 00:07:29.177 --rc geninfo_all_blocks=1 00:07:29.177 --rc geninfo_unexecuted_blocks=1 00:07:29.177 00:07:29.177 ' 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:29.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.177 --rc genhtml_branch_coverage=1 00:07:29.177 --rc genhtml_function_coverage=1 00:07:29.177 --rc genhtml_legend=1 00:07:29.177 --rc geninfo_all_blocks=1 00:07:29.177 --rc geninfo_unexecuted_blocks=1 00:07:29.177 00:07:29.177 ' 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:29.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.177 --rc genhtml_branch_coverage=1 00:07:29.177 --rc genhtml_function_coverage=1 00:07:29.177 --rc genhtml_legend=1 00:07:29.177 --rc geninfo_all_blocks=1 00:07:29.177 --rc geninfo_unexecuted_blocks=1 00:07:29.177 00:07:29.177 ' 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
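[annotation] The xtrace just above walks the lcov version gate: the lcov version string is extracted with awk, then scripts/common.sh's lt/cmp_versions splits both versions on '.', '-' and ':' and compares them numerically field by field to choose between the old- and new-lcov option sets. A condensed, standalone sketch of that comparison follows (simplified: the upstream cmp_versions also takes the operator, here '<', as an argument).

    #!/usr/bin/env bash
    # Return success when version $1 sorts strictly before version $2.
    lt() {
      local -a ver1 ver2
      IFS='.-:' read -ra ver1 <<< "$1"
      IFS='.-:' read -ra ver2 <<< "$2"
      local v
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1  # $1 is newer
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0  # $1 is older
      done
      return 1  # versions are equal, so "less than" is false
    }

    lt 1.15 2 && echo "use old lcov options"  # first fields compare 1 < 2, as traced

Missing trailing fields default to 0, which is why 1.15 versus 2 is decided on the first field alone, exactly as the trace shows before it exports the pre-2.0 LCOV_OPTS.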
00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.177 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:29.178 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:29.178 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:35.752 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:35.752 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:35.752 10:34:42 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:35.752 Found net devices under 0000:86:00.0: cvl_0_0 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:35.752 Found net devices under 0000:86:00.1: cvl_0_1 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:35.752 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:35.752 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.324 ms 00:07:35.752 00:07:35.752 --- 10.0.0.2 ping statistics --- 00:07:35.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.752 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:35.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:35.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:07:35.752 00:07:35.752 --- 10.0.0.1 ping statistics --- 00:07:35.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.752 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1537943 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1537943 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1537943 ']' 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.752 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.753 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.753 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.753 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:35.753 [2024-11-19 10:34:42.452203] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:07:35.753 [2024-11-19 10:34:42.452248] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.753 [2024-11-19 10:34:42.532427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:35.753 [2024-11-19 10:34:42.576030] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:35.753 [2024-11-19 10:34:42.576067] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:35.753 [2024-11-19 10:34:42.576074] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:35.753 [2024-11-19 10:34:42.576080] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:35.753 [2024-11-19 10:34:42.576086] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:35.753 [2024-11-19 10:34:42.577492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.753 [2024-11-19 10:34:42.577515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.753 [2024-11-19 10:34:42.577515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.012 10:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:36.012 10:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:36.012 10:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:36.012 10:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:36.012 10:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:36.012 10:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:36.012 10:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:36.271 [2024-11-19 10:34:43.504481] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:36.271 10:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:36.530 10:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:36.530 10:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:36.789 10:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:36.789 10:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:36.789 10:34:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:37.049 10:34:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=d319415d-fa01-4394-a6ad-298d8fe584c1 00:07:37.049 10:34:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d319415d-fa01-4394-a6ad-298d8fe584c1 lvol 20 00:07:37.307 10:34:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=cf80c9f8-b4ae-40a1-844e-ef8ce9550824 00:07:37.307 10:34:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:37.567 10:34:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cf80c9f8-b4ae-40a1-844e-ef8ce9550824 00:07:37.567 10:34:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:37.826 [2024-11-19 10:34:45.177253] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:37.826 10:34:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:38.085 10:34:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1538597 00:07:38.085 10:34:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:38.085 10:34:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:39.023 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot cf80c9f8-b4ae-40a1-844e-ef8ce9550824 MY_SNAPSHOT 00:07:39.282 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=90495bf4-91c6-4bce-a8ee-d84c4e109413 00:07:39.282 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize cf80c9f8-b4ae-40a1-844e-ef8ce9550824 30 00:07:39.541 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 90495bf4-91c6-4bce-a8ee-d84c4e109413 MY_CLONE 00:07:39.799 10:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=7bed3d3a-8e5e-4cee-9d43-6581a64b9c34 00:07:39.799 10:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 7bed3d3a-8e5e-4cee-9d43-6581a64b9c34 00:07:40.367 10:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1538597 00:07:48.491 Initializing NVMe Controllers 00:07:48.491 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:48.491 Controller IO queue size 128, less than required. 00:07:48.491 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:48.491 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:48.491 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:48.491 Initialization complete. Launching workers. 00:07:48.491 ======================================================== 00:07:48.491 Latency(us) 00:07:48.491 Device Information : IOPS MiB/s Average min max 00:07:48.491 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11907.57 46.51 10751.09 1534.20 61932.32 00:07:48.491 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11845.57 46.27 10810.62 3334.91 60967.11 00:07:48.491 ======================================================== 00:07:48.491 Total : 23753.14 92.79 10780.78 1534.20 61932.32 00:07:48.491 00:07:48.491 10:34:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:48.751 10:34:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cf80c9f8-b4ae-40a1-844e-ef8ce9550824 00:07:48.751 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d319415d-fa01-4394-a6ad-298d8fe584c1 00:07:49.010 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:49.010 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:49.010 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:49.010 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:49.010 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:49.010 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:49.011 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:49.011 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:49.011 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:49.011 rmmod nvme_tcp 00:07:49.011 rmmod nvme_fabrics 00:07:49.011 rmmod nvme_keyring 00:07:49.011 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:49.011 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:49.011 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:49.011 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1537943 ']' 00:07:49.011 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1537943 00:07:49.011 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1537943 ']' 00:07:49.011 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1537943 00:07:49.011 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:49.011 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:49.270 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1537943 00:07:49.270 10:34:56 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:49.270 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:49.270 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1537943' 00:07:49.270 killing process with pid 1537943 00:07:49.270 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1537943 00:07:49.270 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1537943 00:07:49.270 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:49.270 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:49.270 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:49.270 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:49.270 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:49.270 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:49.270 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:49.530 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:49.530 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:49.530 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:49.530 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:49.530 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.439 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:51.439 00:07:51.439 real 0m22.605s 00:07:51.439 user 1m5.113s 00:07:51.439 sys 0m7.823s 00:07:51.439 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.439 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:51.439 ************************************ 00:07:51.439 END TEST nvmf_lvol 00:07:51.439 ************************************ 00:07:51.439 10:34:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:51.439 10:34:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:51.439 10:34:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.439 10:34:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:51.439 ************************************ 00:07:51.439 START TEST nvmf_lvs_grow 00:07:51.439 ************************************ 00:07:51.439 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:51.700 * Looking for test storage... 
00:07:51.700 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:51.700 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:51.700 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:07:51.700 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:51.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.700 --rc genhtml_branch_coverage=1 00:07:51.700 --rc genhtml_function_coverage=1 00:07:51.700 --rc genhtml_legend=1 00:07:51.700 --rc geninfo_all_blocks=1 00:07:51.700 --rc geninfo_unexecuted_blocks=1 00:07:51.700 00:07:51.700 ' 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:51.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.700 --rc genhtml_branch_coverage=1 00:07:51.700 --rc genhtml_function_coverage=1 00:07:51.700 --rc genhtml_legend=1 00:07:51.700 --rc geninfo_all_blocks=1 00:07:51.700 --rc geninfo_unexecuted_blocks=1 00:07:51.700 00:07:51.700 ' 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:51.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.700 --rc genhtml_branch_coverage=1 00:07:51.700 --rc genhtml_function_coverage=1 00:07:51.700 --rc genhtml_legend=1 00:07:51.700 --rc geninfo_all_blocks=1 00:07:51.700 --rc geninfo_unexecuted_blocks=1 00:07:51.700 00:07:51.700 ' 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:51.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.700 --rc genhtml_branch_coverage=1 00:07:51.700 --rc genhtml_function_coverage=1 00:07:51.700 --rc genhtml_legend=1 00:07:51.700 --rc geninfo_all_blocks=1 00:07:51.700 --rc geninfo_unexecuted_blocks=1 00:07:51.700 00:07:51.700 ' 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:51.700 10:34:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.700 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.701 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.701 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.701 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:51.701 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.701 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:51.701 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:51.701 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:51.701 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:51.701 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.701 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.701 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:51.701 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:51.701 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:51.701 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:51.701 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:51.701 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:51.701 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:51.701 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:51.701 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:51.701 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:51.701 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:51.701 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:51.701 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:51.701 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.701 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:51.701 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.701 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:51.701 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:51.701 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:51.701 10:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:58.285 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:58.285 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:58.285 10:35:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:58.285 Found net devices under 0000:86:00.0: cvl_0_0 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:58.285 Found net devices under 0000:86:00.1: cvl_0_1 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:58.285 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:58.285 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:58.285 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:58.285 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:58.285 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:58.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:58.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.411 ms 00:07:58.285 00:07:58.285 --- 10.0.0.2 ping statistics --- 00:07:58.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.285 rtt min/avg/max/mdev = 0.411/0.411/0.411/0.000 ms 00:07:58.285 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:58.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:58.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:07:58.285 00:07:58.285 --- 10.0.0.1 ping statistics --- 00:07:58.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.285 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:07:58.285 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:58.285 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:58.285 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:58.285 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:58.285 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:58.285 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:58.285 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:58.285 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:58.285 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:58.285 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:58.285 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:58.285 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:58.285 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:58.286 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1543978 00:07:58.286 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:58.286 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1543978 00:07:58.286 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1543978 ']' 00:07:58.286 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.286 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:58.286 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.286 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:58.286 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:58.286 [2024-11-19 10:35:05.148855] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
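
Condensed for reference, the nvmf_tcp_init sequence traced above reduces to the sketch below. The commands are the ones the trace shows; the cvl_0_0/cvl_0_1 interface names come from this runner's ice ports and will differ on other hardware.

  # Clear stale addresses, then move the target-side port into its own netns.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # Initiator keeps 10.0.0.1 in the root namespace; target gets 10.0.0.2 inside it.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

  # Bring both ends (plus loopback in the namespace) up.
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Open the NVMe/TCP port on the initiator side and verify reachability both ways.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With the namespace in place, nvmf_tgt is launched through the NVMF_TARGET_NS_CMD prefix (ip netns exec cvl_0_0_ns_spdk), so the target's TCP listeners bind inside the isolated namespace, as the startup that follows shows.
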
00:07:58.286 [2024-11-19 10:35:05.148897] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:58.286 [2024-11-19 10:35:05.228817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.286 [2024-11-19 10:35:05.270900] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:58.286 [2024-11-19 10:35:05.270936] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:58.286 [2024-11-19 10:35:05.270944] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:58.286 [2024-11-19 10:35:05.270954] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:58.286 [2024-11-19 10:35:05.270959] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:58.286 [2024-11-19 10:35:05.271527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.286 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:58.286 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:58.286 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:58.286 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:58.286 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:58.286 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:58.286 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:58.286 [2024-11-19 10:35:05.571848] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:58.286 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:58.286 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:58.286 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.286 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:58.286 ************************************ 00:07:58.286 START TEST lvs_grow_clean 00:07:58.286 ************************************ 00:07:58.286 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:58.286 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:58.286 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:58.286 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:58.286 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:58.286 10:35:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:58.286 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:58.286 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:58.286 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:58.286 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:58.545 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:58.545 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:58.804 10:35:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=5587e9f8-5219-45a1-b7f6-d3ee81709454 00:07:58.804 10:35:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:58.805 10:35:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5587e9f8-5219-45a1-b7f6-d3ee81709454 00:07:59.064 10:35:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:59.064 10:35:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:59.064 10:35:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5587e9f8-5219-45a1-b7f6-d3ee81709454 lvol 150 00:07:59.064 10:35:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=bc01966f-5f1a-4dd4-8f4b-444de1780784 00:07:59.064 10:35:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:59.064 10:35:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:59.324 [2024-11-19 10:35:06.630925] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:59.324 [2024-11-19 10:35:06.630985] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:59.324 true 00:07:59.324 10:35:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
5587e9f8-5219-45a1-b7f6-d3ee81709454 00:07:59.324 10:35:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:59.584 10:35:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:59.584 10:35:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:59.584 10:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bc01966f-5f1a-4dd4-8f4b-444de1780784 00:07:59.843 10:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:00.103 [2024-11-19 10:35:07.377165] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:00.103 10:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:00.363 10:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:00.363 10:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1544361 00:08:00.363 10:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:00.363 10:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1544361 /var/tmp/bdevperf.sock 00:08:00.363 10:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1544361 ']' 00:08:00.363 10:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:00.363 10:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.363 10:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:00.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:00.363 10:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.363 10:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:00.363 [2024-11-19 10:35:07.596988] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
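
Before bdevperf finishes starting, the lvs_grow_clean fixture above was assembled entirely through rpc.py. A condensed sketch of that sequence, with paths shortened and $rpc standing in for the absolute scripts/rpc.py path used in the run:

  rpc=scripts/rpc.py

  # One TCP transport for the target (created by nvmfappstart earlier in the trace).
  $rpc nvmf_create_transport -t tcp -o -u 8192

  # A 200 MiB sparse file backs an AIO bdev with a 4 KiB block size.
  truncate -s 200M test/nvmf/target/aio_bdev
  $rpc bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096

  # lvstore with 4 MiB clusters: 200 MiB comes out as 49 usable data
  # clusters (the trace checks data_clusters == 49).
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)   # 150 MiB volume

  # Double the backing file and let the AIO bdev pick up the new size
  # (51200 -> 102400 blocks); the lvstore itself is not grown yet.
  truncate -s 400M test/nvmf/target/aio_bdev
  $rpc bdev_aio_rescan aio_bdev

  # Export the lvol over NVMe/TCP for bdevperf to attach to.
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The deliberate gap between resizing the base bdev and growing the lvstore is the point of the test: total_data_clusters still reads 49 here, and bdev_lvol_grow_lvstore is only issued later, while bdevperf I/O is in flight.
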
00:08:00.363 [2024-11-19 10:35:07.597034] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1544361 ] 00:08:00.363 [2024-11-19 10:35:07.673442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.363 [2024-11-19 10:35:07.716556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.623 10:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:00.623 10:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:00.623 10:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:00.882 Nvme0n1 00:08:00.882 10:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:00.882 [ 00:08:00.882 { 00:08:00.882 "name": "Nvme0n1", 00:08:00.882 "aliases": [ 00:08:00.882 "bc01966f-5f1a-4dd4-8f4b-444de1780784" 00:08:00.882 ], 00:08:00.882 "product_name": "NVMe disk", 00:08:00.882 "block_size": 4096, 00:08:00.882 "num_blocks": 38912, 00:08:00.882 "uuid": "bc01966f-5f1a-4dd4-8f4b-444de1780784", 00:08:00.882 "numa_id": 1, 00:08:00.882 "assigned_rate_limits": { 00:08:00.882 "rw_ios_per_sec": 0, 00:08:00.882 "rw_mbytes_per_sec": 0, 00:08:00.882 "r_mbytes_per_sec": 0, 00:08:00.882 "w_mbytes_per_sec": 0 00:08:00.882 }, 00:08:00.882 "claimed": false, 00:08:00.882 "zoned": false, 00:08:00.882 "supported_io_types": { 00:08:00.882 "read": true, 00:08:00.882 "write": true, 00:08:00.882 "unmap": true, 00:08:00.882 "flush": true, 00:08:00.882 "reset": true, 00:08:00.882 "nvme_admin": true, 00:08:00.882 "nvme_io": true, 00:08:00.882 "nvme_io_md": false, 00:08:00.882 "write_zeroes": true, 00:08:00.882 "zcopy": false, 00:08:00.882 "get_zone_info": false, 00:08:00.882 "zone_management": false, 00:08:00.882 "zone_append": false, 00:08:00.882 "compare": true, 00:08:00.882 "compare_and_write": true, 00:08:00.882 "abort": true, 00:08:00.882 "seek_hole": false, 00:08:00.882 "seek_data": false, 00:08:00.882 "copy": true, 00:08:00.883 "nvme_iov_md": false 00:08:00.883 }, 00:08:00.883 "memory_domains": [ 00:08:00.883 { 00:08:00.883 "dma_device_id": "system", 00:08:00.883 "dma_device_type": 1 00:08:00.883 } 00:08:00.883 ], 00:08:00.883 "driver_specific": { 00:08:00.883 "nvme": [ 00:08:00.883 { 00:08:00.883 "trid": { 00:08:00.883 "trtype": "TCP", 00:08:00.883 "adrfam": "IPv4", 00:08:00.883 "traddr": "10.0.0.2", 00:08:00.883 "trsvcid": "4420", 00:08:00.883 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:00.883 }, 00:08:00.883 "ctrlr_data": { 00:08:00.883 "cntlid": 1, 00:08:00.883 "vendor_id": "0x8086", 00:08:00.883 "model_number": "SPDK bdev Controller", 00:08:00.883 "serial_number": "SPDK0", 00:08:00.883 "firmware_revision": "25.01", 00:08:00.883 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:00.883 "oacs": { 00:08:00.883 "security": 0, 00:08:00.883 "format": 0, 00:08:00.883 "firmware": 0, 00:08:00.883 "ns_manage": 0 00:08:00.883 }, 00:08:00.883 "multi_ctrlr": true, 00:08:00.883 
"ana_reporting": false 00:08:00.883 }, 00:08:00.883 "vs": { 00:08:00.883 "nvme_version": "1.3" 00:08:00.883 }, 00:08:00.883 "ns_data": { 00:08:00.883 "id": 1, 00:08:00.883 "can_share": true 00:08:00.883 } 00:08:00.883 } 00:08:00.883 ], 00:08:00.883 "mp_policy": "active_passive" 00:08:00.883 } 00:08:00.883 } 00:08:00.883 ] 00:08:00.883 10:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1544489 00:08:00.883 10:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:00.883 10:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:01.142 Running I/O for 10 seconds... 00:08:02.080 Latency(us) 00:08:02.080 [2024-11-19T09:35:09.529Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:02.080 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.080 Nvme0n1 : 1.00 22678.00 88.59 0.00 0.00 0.00 0.00 0.00 00:08:02.080 [2024-11-19T09:35:09.529Z] =================================================================================================================== 00:08:02.080 [2024-11-19T09:35:09.529Z] Total : 22678.00 88.59 0.00 0.00 0.00 0.00 0.00 00:08:02.080 00:08:03.030 10:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5587e9f8-5219-45a1-b7f6-d3ee81709454 00:08:03.030 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.030 Nvme0n1 : 2.00 22642.00 88.45 0.00 0.00 0.00 0.00 0.00 00:08:03.030 [2024-11-19T09:35:10.479Z] =================================================================================================================== 00:08:03.030 [2024-11-19T09:35:10.479Z] Total : 22642.00 88.45 0.00 0.00 0.00 0.00 0.00 00:08:03.030 00:08:03.348 true 00:08:03.348 10:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5587e9f8-5219-45a1-b7f6-d3ee81709454 00:08:03.348 10:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:03.348 10:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:03.348 10:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:03.348 10:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1544489 00:08:03.998 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.998 Nvme0n1 : 3.00 22705.33 88.69 0.00 0.00 0.00 0.00 0.00 00:08:03.998 [2024-11-19T09:35:11.448Z] =================================================================================================================== 00:08:03.999 [2024-11-19T09:35:11.448Z] Total : 22705.33 88.69 0.00 0.00 0.00 0.00 0.00 00:08:03.999 00:08:05.373 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:05.373 Nvme0n1 : 4.00 22754.00 88.88 0.00 0.00 0.00 0.00 0.00 00:08:05.373 [2024-11-19T09:35:12.822Z] 
=================================================================================================================== 00:08:05.373 [2024-11-19T09:35:12.822Z] Total : 22754.00 88.88 0.00 0.00 0.00 0.00 0.00 00:08:05.373 00:08:06.310 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:06.310 Nvme0n1 : 5.00 22818.40 89.13 0.00 0.00 0.00 0.00 0.00 00:08:06.310 [2024-11-19T09:35:13.759Z] =================================================================================================================== 00:08:06.310 [2024-11-19T09:35:13.759Z] Total : 22818.40 89.13 0.00 0.00 0.00 0.00 0.00 00:08:06.310 00:08:07.246 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:07.246 Nvme0n1 : 6.00 22879.33 89.37 0.00 0.00 0.00 0.00 0.00 00:08:07.246 [2024-11-19T09:35:14.695Z] =================================================================================================================== 00:08:07.246 [2024-11-19T09:35:14.695Z] Total : 22879.33 89.37 0.00 0.00 0.00 0.00 0.00 00:08:07.246 00:08:08.182 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.182 Nvme0n1 : 7.00 22913.71 89.51 0.00 0.00 0.00 0.00 0.00 00:08:08.182 [2024-11-19T09:35:15.631Z] =================================================================================================================== 00:08:08.182 [2024-11-19T09:35:15.631Z] Total : 22913.71 89.51 0.00 0.00 0.00 0.00 0.00 00:08:08.182 00:08:09.117 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:09.117 Nvme0n1 : 8.00 22940.50 89.61 0.00 0.00 0.00 0.00 0.00 00:08:09.117 [2024-11-19T09:35:16.566Z] =================================================================================================================== 00:08:09.117 [2024-11-19T09:35:16.566Z] Total : 22940.50 89.61 0.00 0.00 0.00 0.00 0.00 00:08:09.117 00:08:10.054 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:10.054 Nvme0n1 : 9.00 22967.22 89.72 0.00 0.00 0.00 0.00 0.00 00:08:10.054 [2024-11-19T09:35:17.503Z] =================================================================================================================== 00:08:10.054 [2024-11-19T09:35:17.503Z] Total : 22967.22 89.72 0.00 0.00 0.00 0.00 0.00 00:08:10.054 00:08:10.991 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:10.991 Nvme0n1 : 10.00 22988.50 89.80 0.00 0.00 0.00 0.00 0.00 00:08:10.991 [2024-11-19T09:35:18.440Z] =================================================================================================================== 00:08:10.991 [2024-11-19T09:35:18.440Z] Total : 22988.50 89.80 0.00 0.00 0.00 0.00 0.00 00:08:10.991 00:08:10.991 00:08:10.991 Latency(us) 00:08:10.991 [2024-11-19T09:35:18.440Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:10.991 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:10.991 Nvme0n1 : 10.00 22988.12 89.80 0.00 0.00 5564.92 1403.33 11739.49 00:08:10.991 [2024-11-19T09:35:18.440Z] =================================================================================================================== 00:08:10.991 [2024-11-19T09:35:18.440Z] Total : 22988.12 89.80 0.00 0.00 5564.92 1403.33 11739.49 00:08:10.991 { 00:08:10.991 "results": [ 00:08:10.991 { 00:08:10.991 "job": "Nvme0n1", 00:08:10.991 "core_mask": "0x2", 00:08:10.991 "workload": "randwrite", 00:08:10.991 "status": "finished", 00:08:10.992 "queue_depth": 128, 00:08:10.992 "io_size": 4096, 00:08:10.992 
"runtime": 10.002951, 00:08:10.992 "iops": 22988.116206907343, 00:08:10.992 "mibps": 89.79732893323181, 00:08:10.992 "io_failed": 0, 00:08:10.992 "io_timeout": 0, 00:08:10.992 "avg_latency_us": 5564.922018920263, 00:08:10.992 "min_latency_us": 1403.3252173913042, 00:08:10.992 "max_latency_us": 11739.492173913044 00:08:10.992 } 00:08:10.992 ], 00:08:10.992 "core_count": 1 00:08:10.992 } 00:08:11.250 10:35:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1544361 00:08:11.250 10:35:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1544361 ']' 00:08:11.250 10:35:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1544361 00:08:11.250 10:35:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:11.250 10:35:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:11.250 10:35:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1544361 00:08:11.250 10:35:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:11.250 10:35:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:11.250 10:35:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1544361' 00:08:11.250 killing process with pid 1544361 00:08:11.250 10:35:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1544361 00:08:11.250 Received shutdown signal, test time was about 10.000000 seconds 00:08:11.250 00:08:11.250 Latency(us) 00:08:11.250 [2024-11-19T09:35:18.699Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:11.250 [2024-11-19T09:35:18.699Z] =================================================================================================================== 00:08:11.250 [2024-11-19T09:35:18.699Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:11.250 10:35:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1544361 00:08:11.250 10:35:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:11.509 10:35:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:11.768 10:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5587e9f8-5219-45a1-b7f6-d3ee81709454 00:08:11.768 10:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:12.026 10:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:12.026 10:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:12.026 10:35:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:12.026 [2024-11-19 10:35:19.414620] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:12.026 10:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5587e9f8-5219-45a1-b7f6-d3ee81709454 00:08:12.026 10:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:12.026 10:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5587e9f8-5219-45a1-b7f6-d3ee81709454 00:08:12.026 10:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:12.026 10:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:12.026 10:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:12.026 10:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:12.026 10:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:12.026 10:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:12.026 10:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:12.026 10:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:12.026 10:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5587e9f8-5219-45a1-b7f6-d3ee81709454 00:08:12.285 request: 00:08:12.285 { 00:08:12.285 "uuid": "5587e9f8-5219-45a1-b7f6-d3ee81709454", 00:08:12.285 "method": "bdev_lvol_get_lvstores", 00:08:12.285 "req_id": 1 00:08:12.285 } 00:08:12.285 Got JSON-RPC error response 00:08:12.285 response: 00:08:12.285 { 00:08:12.285 "code": -19, 00:08:12.285 "message": "No such device" 00:08:12.285 } 00:08:12.285 10:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:12.285 10:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:12.285 10:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:12.285 10:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:12.285 10:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:12.544 aio_bdev 00:08:12.544 10:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev bc01966f-5f1a-4dd4-8f4b-444de1780784 00:08:12.544 10:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=bc01966f-5f1a-4dd4-8f4b-444de1780784 00:08:12.544 10:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:12.544 10:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:12.544 10:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:12.544 10:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:12.544 10:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:12.803 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b bc01966f-5f1a-4dd4-8f4b-444de1780784 -t 2000 00:08:12.803 [ 00:08:12.803 { 00:08:12.803 "name": "bc01966f-5f1a-4dd4-8f4b-444de1780784", 00:08:12.803 "aliases": [ 00:08:12.803 "lvs/lvol" 00:08:12.803 ], 00:08:12.803 "product_name": "Logical Volume", 00:08:12.803 "block_size": 4096, 00:08:12.803 "num_blocks": 38912, 00:08:12.803 "uuid": "bc01966f-5f1a-4dd4-8f4b-444de1780784", 00:08:12.803 "assigned_rate_limits": { 00:08:12.803 "rw_ios_per_sec": 0, 00:08:12.803 "rw_mbytes_per_sec": 0, 00:08:12.803 "r_mbytes_per_sec": 0, 00:08:12.803 "w_mbytes_per_sec": 0 00:08:12.803 }, 00:08:12.803 "claimed": false, 00:08:12.803 "zoned": false, 00:08:12.803 "supported_io_types": { 00:08:12.803 "read": true, 00:08:12.803 "write": true, 00:08:12.803 "unmap": true, 00:08:12.803 "flush": false, 00:08:12.803 "reset": true, 00:08:12.803 "nvme_admin": false, 00:08:12.803 "nvme_io": false, 00:08:12.803 "nvme_io_md": false, 00:08:12.803 "write_zeroes": true, 00:08:12.803 "zcopy": false, 00:08:12.803 "get_zone_info": false, 00:08:12.803 "zone_management": false, 00:08:12.803 "zone_append": false, 00:08:12.803 "compare": false, 00:08:12.803 "compare_and_write": false, 00:08:12.803 "abort": false, 00:08:12.803 "seek_hole": true, 00:08:12.803 "seek_data": true, 00:08:12.803 "copy": false, 00:08:12.803 "nvme_iov_md": false 00:08:12.803 }, 00:08:12.803 "driver_specific": { 00:08:12.803 "lvol": { 00:08:12.803 "lvol_store_uuid": "5587e9f8-5219-45a1-b7f6-d3ee81709454", 00:08:12.803 "base_bdev": "aio_bdev", 00:08:12.803 "thin_provision": false, 00:08:12.803 "num_allocated_clusters": 38, 00:08:12.803 "snapshot": false, 00:08:12.803 "clone": false, 00:08:12.803 "esnap_clone": false 00:08:12.803 } 00:08:12.803 } 00:08:12.803 } 00:08:12.803 ] 00:08:12.803 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:12.803 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5587e9f8-5219-45a1-b7f6-d3ee81709454 00:08:12.803 
10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:13.062 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:13.062 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:13.062 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5587e9f8-5219-45a1-b7f6-d3ee81709454 00:08:13.321 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:13.321 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bc01966f-5f1a-4dd4-8f4b-444de1780784 00:08:13.580 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5587e9f8-5219-45a1-b7f6-d3ee81709454 00:08:13.839 10:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:13.839 10:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:13.839 00:08:13.839 real 0m15.630s 00:08:13.839 user 0m15.224s 00:08:13.839 sys 0m1.458s 00:08:13.839 10:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:13.839 10:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:13.839 ************************************ 00:08:13.839 END TEST lvs_grow_clean 00:08:13.839 ************************************ 00:08:14.098 10:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:14.098 10:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:14.098 10:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.098 10:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:14.098 ************************************ 00:08:14.098 START TEST lvs_grow_dirty 00:08:14.098 ************************************ 00:08:14.098 10:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:14.098 10:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:14.098 10:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:14.098 10:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:14.098 10:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:14.098 10:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:14.098 10:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:14.098 10:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:14.098 10:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:14.098 10:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:14.357 10:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:14.357 10:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:14.357 10:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=d3368d3d-4475-4676-9a48-e2d4dccfa3d8 00:08:14.357 10:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d3368d3d-4475-4676-9a48-e2d4dccfa3d8 00:08:14.358 10:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:14.617 10:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:14.617 10:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:14.617 10:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d3368d3d-4475-4676-9a48-e2d4dccfa3d8 lvol 150 00:08:14.876 10:35:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=3c8cc7bb-c738-4afc-8fc2-a542a2bacafc 00:08:14.876 10:35:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:14.876 10:35:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:15.135 [2024-11-19 10:35:22.345930] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:15.135 [2024-11-19 10:35:22.345985] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:15.135 true 00:08:15.135 10:35:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d3368d3d-4475-4676-9a48-e2d4dccfa3d8 00:08:15.135 10:35:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:15.135 10:35:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:15.135 10:35:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:15.394 10:35:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3c8cc7bb-c738-4afc-8fc2-a542a2bacafc 00:08:15.654 10:35:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:15.654 [2024-11-19 10:35:23.096203] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:15.913 10:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:15.913 10:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:15.913 10:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1547093 00:08:15.913 10:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:15.913 10:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1547093 /var/tmp/bdevperf.sock 00:08:15.913 10:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1547093 ']' 00:08:15.913 10:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:15.913 10:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:15.913 10:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:15.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:15.913 10:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:15.913 10:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:15.913 [2024-11-19 10:35:23.328366] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
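
lvs_grow_dirty rebuilds the same fixture with fresh UUIDs and, as in the clean case just completed, grows the store while bdevperf is writing. Reduced to its core, the in-flight check that both variants perform looks like this sketch (paths shortened; the backgrounding with & and $! is inferred from the run_test_pid assignment in the trace):

  rpc=scripts/rpc.py
  lvs=d3368d3d-4475-4676-9a48-e2d4dccfa3d8   # this run's lvstore UUID

  # Attach the exported lvol through bdevperf's private RPC socket.
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

  # Start 10 s of 4 KiB random writes in the background...
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  run_test_pid=$!

  # ...and grow the lvstore into the space exposed by the earlier
  # 400M truncate + bdev_aio_rescan while writes are in flight.
  sleep 2
  $rpc bdev_lvol_grow_lvstore -u "$lvs"

  # The store should now report 99 total data clusters, up from 49.
  clusters=$($rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
  (( clusters == 99 ))
  wait "$run_test_pid"

The dirty case differs afterwards in the branch guarded by [[ $1 == dirty ]], which the clean run evaluated as false at nvmf_lvs_grow.sh@72 earlier in the trace.
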
00:08:15.913 [2024-11-19 10:35:23.328412] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1547093 ] 00:08:16.172 [2024-11-19 10:35:23.404320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.172 [2024-11-19 10:35:23.445016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.172 10:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:16.172 10:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:16.172 10:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:16.741 Nvme0n1 00:08:16.741 10:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:16.741 [ 00:08:16.741 { 00:08:16.741 "name": "Nvme0n1", 00:08:16.741 "aliases": [ 00:08:16.741 "3c8cc7bb-c738-4afc-8fc2-a542a2bacafc" 00:08:16.741 ], 00:08:16.741 "product_name": "NVMe disk", 00:08:16.741 "block_size": 4096, 00:08:16.741 "num_blocks": 38912, 00:08:16.741 "uuid": "3c8cc7bb-c738-4afc-8fc2-a542a2bacafc", 00:08:16.741 "numa_id": 1, 00:08:16.741 "assigned_rate_limits": { 00:08:16.741 "rw_ios_per_sec": 0, 00:08:16.741 "rw_mbytes_per_sec": 0, 00:08:16.741 "r_mbytes_per_sec": 0, 00:08:16.741 "w_mbytes_per_sec": 0 00:08:16.741 }, 00:08:16.741 "claimed": false, 00:08:16.741 "zoned": false, 00:08:16.741 "supported_io_types": { 00:08:16.741 "read": true, 00:08:16.741 "write": true, 00:08:16.741 "unmap": true, 00:08:16.741 "flush": true, 00:08:16.741 "reset": true, 00:08:16.741 "nvme_admin": true, 00:08:16.741 "nvme_io": true, 00:08:16.741 "nvme_io_md": false, 00:08:16.741 "write_zeroes": true, 00:08:16.741 "zcopy": false, 00:08:16.741 "get_zone_info": false, 00:08:16.741 "zone_management": false, 00:08:16.741 "zone_append": false, 00:08:16.741 "compare": true, 00:08:16.741 "compare_and_write": true, 00:08:16.741 "abort": true, 00:08:16.741 "seek_hole": false, 00:08:16.741 "seek_data": false, 00:08:16.741 "copy": true, 00:08:16.741 "nvme_iov_md": false 00:08:16.741 }, 00:08:16.741 "memory_domains": [ 00:08:16.741 { 00:08:16.741 "dma_device_id": "system", 00:08:16.741 "dma_device_type": 1 00:08:16.741 } 00:08:16.741 ], 00:08:16.741 "driver_specific": { 00:08:16.741 "nvme": [ 00:08:16.741 { 00:08:16.741 "trid": { 00:08:16.741 "trtype": "TCP", 00:08:16.741 "adrfam": "IPv4", 00:08:16.741 "traddr": "10.0.0.2", 00:08:16.741 "trsvcid": "4420", 00:08:16.741 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:16.741 }, 00:08:16.741 "ctrlr_data": { 00:08:16.741 "cntlid": 1, 00:08:16.741 "vendor_id": "0x8086", 00:08:16.741 "model_number": "SPDK bdev Controller", 00:08:16.741 "serial_number": "SPDK0", 00:08:16.741 "firmware_revision": "25.01", 00:08:16.741 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:16.741 "oacs": { 00:08:16.741 "security": 0, 00:08:16.741 "format": 0, 00:08:16.741 "firmware": 0, 00:08:16.741 "ns_manage": 0 00:08:16.741 }, 00:08:16.741 "multi_ctrlr": true, 00:08:16.741 
"ana_reporting": false 00:08:16.741 }, 00:08:16.741 "vs": { 00:08:16.741 "nvme_version": "1.3" 00:08:16.741 }, 00:08:16.741 "ns_data": { 00:08:16.741 "id": 1, 00:08:16.741 "can_share": true 00:08:16.741 } 00:08:16.741 } 00:08:16.741 ], 00:08:16.741 "mp_policy": "active_passive" 00:08:16.741 } 00:08:16.741 } 00:08:16.741 ] 00:08:16.741 10:35:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1547225 00:08:16.741 10:35:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:16.741 10:35:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:17.001 Running I/O for 10 seconds... 00:08:17.940 Latency(us) 00:08:17.940 [2024-11-19T09:35:25.389Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:17.940 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.940 Nvme0n1 : 1.00 22607.00 88.31 0.00 0.00 0.00 0.00 0.00 00:08:17.940 [2024-11-19T09:35:25.389Z] =================================================================================================================== 00:08:17.940 [2024-11-19T09:35:25.389Z] Total : 22607.00 88.31 0.00 0.00 0.00 0.00 0.00 00:08:17.940 00:08:18.876 10:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d3368d3d-4475-4676-9a48-e2d4dccfa3d8 00:08:18.877 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.877 Nvme0n1 : 2.00 22718.50 88.74 0.00 0.00 0.00 0.00 0.00 00:08:18.877 [2024-11-19T09:35:26.326Z] =================================================================================================================== 00:08:18.877 [2024-11-19T09:35:26.326Z] Total : 22718.50 88.74 0.00 0.00 0.00 0.00 0.00 00:08:18.877 00:08:19.135 true 00:08:19.135 10:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d3368d3d-4475-4676-9a48-e2d4dccfa3d8 00:08:19.135 10:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:19.394 10:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:19.394 10:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:19.394 10:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1547225 00:08:19.962 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.962 Nvme0n1 : 3.00 22774.33 88.96 0.00 0.00 0.00 0.00 0.00 00:08:19.962 [2024-11-19T09:35:27.411Z] =================================================================================================================== 00:08:19.962 [2024-11-19T09:35:27.411Z] Total : 22774.33 88.96 0.00 0.00 0.00 0.00 0.00 00:08:19.962 00:08:20.898 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.898 Nvme0n1 : 4.00 22817.00 89.13 0.00 0.00 0.00 0.00 0.00 00:08:20.898 [2024-11-19T09:35:28.347Z] 
=================================================================================================================== 00:08:20.898 [2024-11-19T09:35:28.347Z] Total : 22817.00 89.13 0.00 0.00 0.00 0.00 0.00 00:08:20.898 00:08:21.834 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.834 Nvme0n1 : 5.00 22868.80 89.33 0.00 0.00 0.00 0.00 0.00 00:08:21.834 [2024-11-19T09:35:29.283Z] =================================================================================================================== 00:08:21.834 [2024-11-19T09:35:29.283Z] Total : 22868.80 89.33 0.00 0.00 0.00 0.00 0.00 00:08:21.834 00:08:23.212 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.212 Nvme0n1 : 6.00 22867.33 89.33 0.00 0.00 0.00 0.00 0.00 00:08:23.212 [2024-11-19T09:35:30.661Z] =================================================================================================================== 00:08:23.212 [2024-11-19T09:35:30.661Z] Total : 22867.33 89.33 0.00 0.00 0.00 0.00 0.00 00:08:23.212 00:08:24.149 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.149 Nvme0n1 : 7.00 22859.29 89.29 0.00 0.00 0.00 0.00 0.00 00:08:24.149 [2024-11-19T09:35:31.598Z] =================================================================================================================== 00:08:24.149 [2024-11-19T09:35:31.598Z] Total : 22859.29 89.29 0.00 0.00 0.00 0.00 0.00 00:08:24.149 00:08:25.087 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.087 Nvme0n1 : 8.00 22890.38 89.42 0.00 0.00 0.00 0.00 0.00 00:08:25.087 [2024-11-19T09:35:32.536Z] =================================================================================================================== 00:08:25.087 [2024-11-19T09:35:32.536Z] Total : 22890.38 89.42 0.00 0.00 0.00 0.00 0.00 00:08:25.087 00:08:26.024 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.024 Nvme0n1 : 9.00 22922.44 89.54 0.00 0.00 0.00 0.00 0.00 00:08:26.024 [2024-11-19T09:35:33.473Z] =================================================================================================================== 00:08:26.024 [2024-11-19T09:35:33.473Z] Total : 22922.44 89.54 0.00 0.00 0.00 0.00 0.00 00:08:26.024 00:08:26.961 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.961 Nvme0n1 : 10.00 22936.40 89.60 0.00 0.00 0.00 0.00 0.00 00:08:26.961 [2024-11-19T09:35:34.410Z] =================================================================================================================== 00:08:26.961 [2024-11-19T09:35:34.410Z] Total : 22936.40 89.60 0.00 0.00 0.00 0.00 0.00 00:08:26.961 00:08:26.961 00:08:26.961 Latency(us) 00:08:26.961 [2024-11-19T09:35:34.410Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:26.961 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.961 Nvme0n1 : 10.00 22940.92 89.61 0.00 0.00 5576.77 3191.32 13620.09 00:08:26.961 [2024-11-19T09:35:34.410Z] =================================================================================================================== 00:08:26.961 [2024-11-19T09:35:34.410Z] Total : 22940.92 89.61 0.00 0.00 5576.77 3191.32 13620.09 00:08:26.961 { 00:08:26.961 "results": [ 00:08:26.961 { 00:08:26.961 "job": "Nvme0n1", 00:08:26.961 "core_mask": "0x2", 00:08:26.961 "workload": "randwrite", 00:08:26.961 "status": "finished", 00:08:26.961 "queue_depth": 128, 00:08:26.961 "io_size": 4096, 00:08:26.961 
"runtime": 10.003608, 00:08:26.961 "iops": 22940.922915012263, 00:08:26.961 "mibps": 89.61298013676665, 00:08:26.961 "io_failed": 0, 00:08:26.961 "io_timeout": 0, 00:08:26.961 "avg_latency_us": 5576.767699152532, 00:08:26.961 "min_latency_us": 3191.318260869565, 00:08:26.961 "max_latency_us": 13620.090434782609 00:08:26.961 } 00:08:26.961 ], 00:08:26.961 "core_count": 1 00:08:26.961 } 00:08:26.961 10:35:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1547093 00:08:26.961 10:35:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1547093 ']' 00:08:26.961 10:35:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1547093 00:08:26.961 10:35:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:26.961 10:35:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:26.961 10:35:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1547093 00:08:26.961 10:35:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:26.961 10:35:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:26.961 10:35:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1547093' 00:08:26.961 killing process with pid 1547093 00:08:26.961 10:35:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1547093 00:08:26.961 Received shutdown signal, test time was about 10.000000 seconds 00:08:26.961 00:08:26.961 Latency(us) 00:08:26.961 [2024-11-19T09:35:34.410Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:26.961 [2024-11-19T09:35:34.410Z] =================================================================================================================== 00:08:26.961 [2024-11-19T09:35:34.410Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:26.961 10:35:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1547093 00:08:27.221 10:35:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:27.479 10:35:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:27.738 10:35:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d3368d3d-4475-4676-9a48-e2d4dccfa3d8 00:08:27.738 10:35:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:27.738 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:27.738 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:27.738 10:35:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1543978 00:08:27.738 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1543978 00:08:27.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1543978 Killed "${NVMF_APP[@]}" "$@" 00:08:27.738 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:27.738 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:27.738 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:27.738 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:27.738 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:27.738 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1549025 00:08:27.738 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1549025 00:08:27.738 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:27.738 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1549025 ']' 00:08:27.738 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.738 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:27.738 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.738 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:27.738 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:27.997 [2024-11-19 10:35:35.224359] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:08:27.997 [2024-11-19 10:35:35.224405] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.997 [2024-11-19 10:35:35.302502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.997 [2024-11-19 10:35:35.343839] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:27.997 [2024-11-19 10:35:35.343877] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:27.997 [2024-11-19 10:35:35.343885] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:27.997 [2024-11-19 10:35:35.343891] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
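The kill -9 above leaves the grown lvstore dirty on disk, so the freshly started target has to recover it from the aio file rather than load it cleanly. A minimal sketch of that check, reusing the paths and lvstore UUID from this run ($SPDK as in the sketch above):

  # Re-creating the aio bdev replays the dirty blobstore metadata
  # (the "Performing recovery on blobstore" notices that follow in the log)
  $SPDK/scripts/rpc.py bdev_aio_create $SPDK/test/nvmf/target/aio_bdev aio_bdev 4096
  # The recovered lvstore should report the post-grow geometry seen in this run:
  # 99 total data clusters, 61 of them free
  $SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u d3368d3d-4475-4676-9a48-e2d4dccfa3d8 | jq -r '.[0].total_data_clusters'
  $SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u d3368d3d-4475-4676-9a48-e2d4dccfa3d8 | jq -r '.[0].free_clusters'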
00:08:27.997 [2024-11-19 10:35:35.343896] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:27.997 [2024-11-19 10:35:35.344477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.997 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:27.997 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:27.997 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:27.997 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:27.997 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:28.257 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:28.257 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:28.257 [2024-11-19 10:35:35.650889] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:28.257 [2024-11-19 10:35:35.650990] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:28.257 [2024-11-19 10:35:35.651017] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:28.257 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:28.257 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 3c8cc7bb-c738-4afc-8fc2-a542a2bacafc 00:08:28.257 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=3c8cc7bb-c738-4afc-8fc2-a542a2bacafc 00:08:28.257 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:28.257 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:28.257 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:28.257 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:28.257 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:28.516 10:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3c8cc7bb-c738-4afc-8fc2-a542a2bacafc -t 2000 00:08:28.775 [ 00:08:28.775 { 00:08:28.775 "name": "3c8cc7bb-c738-4afc-8fc2-a542a2bacafc", 00:08:28.775 "aliases": [ 00:08:28.775 "lvs/lvol" 00:08:28.775 ], 00:08:28.775 "product_name": "Logical Volume", 00:08:28.775 "block_size": 4096, 00:08:28.775 "num_blocks": 38912, 00:08:28.775 "uuid": "3c8cc7bb-c738-4afc-8fc2-a542a2bacafc", 00:08:28.775 "assigned_rate_limits": { 00:08:28.775 "rw_ios_per_sec": 0, 00:08:28.775 "rw_mbytes_per_sec": 0, 
00:08:28.775 "r_mbytes_per_sec": 0, 00:08:28.775 "w_mbytes_per_sec": 0 00:08:28.775 }, 00:08:28.775 "claimed": false, 00:08:28.775 "zoned": false, 00:08:28.775 "supported_io_types": { 00:08:28.775 "read": true, 00:08:28.775 "write": true, 00:08:28.775 "unmap": true, 00:08:28.775 "flush": false, 00:08:28.775 "reset": true, 00:08:28.775 "nvme_admin": false, 00:08:28.775 "nvme_io": false, 00:08:28.775 "nvme_io_md": false, 00:08:28.775 "write_zeroes": true, 00:08:28.775 "zcopy": false, 00:08:28.775 "get_zone_info": false, 00:08:28.775 "zone_management": false, 00:08:28.775 "zone_append": false, 00:08:28.775 "compare": false, 00:08:28.775 "compare_and_write": false, 00:08:28.775 "abort": false, 00:08:28.775 "seek_hole": true, 00:08:28.775 "seek_data": true, 00:08:28.775 "copy": false, 00:08:28.775 "nvme_iov_md": false 00:08:28.775 }, 00:08:28.775 "driver_specific": { 00:08:28.775 "lvol": { 00:08:28.775 "lvol_store_uuid": "d3368d3d-4475-4676-9a48-e2d4dccfa3d8", 00:08:28.775 "base_bdev": "aio_bdev", 00:08:28.775 "thin_provision": false, 00:08:28.775 "num_allocated_clusters": 38, 00:08:28.775 "snapshot": false, 00:08:28.775 "clone": false, 00:08:28.775 "esnap_clone": false 00:08:28.775 } 00:08:28.775 } 00:08:28.775 } 00:08:28.775 ] 00:08:28.775 10:35:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:28.775 10:35:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d3368d3d-4475-4676-9a48-e2d4dccfa3d8 00:08:28.775 10:35:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:29.033 10:35:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:29.034 10:35:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d3368d3d-4475-4676-9a48-e2d4dccfa3d8 00:08:29.034 10:35:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:29.034 10:35:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:29.034 10:35:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:29.293 [2024-11-19 10:35:36.607685] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:29.293 10:35:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d3368d3d-4475-4676-9a48-e2d4dccfa3d8 00:08:29.293 10:35:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:29.293 10:35:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d3368d3d-4475-4676-9a48-e2d4dccfa3d8 00:08:29.293 10:35:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:29.293 10:35:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:29.293 10:35:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:29.293 10:35:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:29.293 10:35:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:29.293 10:35:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:29.293 10:35:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:29.293 10:35:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:29.293 10:35:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d3368d3d-4475-4676-9a48-e2d4dccfa3d8 00:08:29.552 request: 00:08:29.552 { 00:08:29.552 "uuid": "d3368d3d-4475-4676-9a48-e2d4dccfa3d8", 00:08:29.552 "method": "bdev_lvol_get_lvstores", 00:08:29.552 "req_id": 1 00:08:29.552 } 00:08:29.552 Got JSON-RPC error response 00:08:29.552 response: 00:08:29.552 { 00:08:29.552 "code": -19, 00:08:29.552 "message": "No such device" 00:08:29.552 } 00:08:29.552 10:35:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:29.552 10:35:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:29.552 10:35:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:29.552 10:35:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:29.552 10:35:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:29.812 aio_bdev 00:08:29.812 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 3c8cc7bb-c738-4afc-8fc2-a542a2bacafc 00:08:29.812 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=3c8cc7bb-c738-4afc-8fc2-a542a2bacafc 00:08:29.812 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:29.812 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:29.812 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:29.812 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:29.812 10:35:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:29.812 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3c8cc7bb-c738-4afc-8fc2-a542a2bacafc -t 2000 00:08:30.071 [ 00:08:30.071 { 00:08:30.071 "name": "3c8cc7bb-c738-4afc-8fc2-a542a2bacafc", 00:08:30.071 "aliases": [ 00:08:30.071 "lvs/lvol" 00:08:30.071 ], 00:08:30.071 "product_name": "Logical Volume", 00:08:30.071 "block_size": 4096, 00:08:30.071 "num_blocks": 38912, 00:08:30.071 "uuid": "3c8cc7bb-c738-4afc-8fc2-a542a2bacafc", 00:08:30.071 "assigned_rate_limits": { 00:08:30.071 "rw_ios_per_sec": 0, 00:08:30.071 "rw_mbytes_per_sec": 0, 00:08:30.071 "r_mbytes_per_sec": 0, 00:08:30.071 "w_mbytes_per_sec": 0 00:08:30.071 }, 00:08:30.071 "claimed": false, 00:08:30.071 "zoned": false, 00:08:30.071 "supported_io_types": { 00:08:30.071 "read": true, 00:08:30.071 "write": true, 00:08:30.071 "unmap": true, 00:08:30.071 "flush": false, 00:08:30.071 "reset": true, 00:08:30.071 "nvme_admin": false, 00:08:30.071 "nvme_io": false, 00:08:30.071 "nvme_io_md": false, 00:08:30.071 "write_zeroes": true, 00:08:30.071 "zcopy": false, 00:08:30.071 "get_zone_info": false, 00:08:30.071 "zone_management": false, 00:08:30.071 "zone_append": false, 00:08:30.071 "compare": false, 00:08:30.071 "compare_and_write": false, 00:08:30.071 "abort": false, 00:08:30.071 "seek_hole": true, 00:08:30.071 "seek_data": true, 00:08:30.071 "copy": false, 00:08:30.071 "nvme_iov_md": false 00:08:30.071 }, 00:08:30.071 "driver_specific": { 00:08:30.071 "lvol": { 00:08:30.071 "lvol_store_uuid": "d3368d3d-4475-4676-9a48-e2d4dccfa3d8", 00:08:30.071 "base_bdev": "aio_bdev", 00:08:30.071 "thin_provision": false, 00:08:30.071 "num_allocated_clusters": 38, 00:08:30.071 "snapshot": false, 00:08:30.071 "clone": false, 00:08:30.071 "esnap_clone": false 00:08:30.071 } 00:08:30.071 } 00:08:30.071 } 00:08:30.071 ] 00:08:30.071 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:30.071 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d3368d3d-4475-4676-9a48-e2d4dccfa3d8 00:08:30.071 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:30.330 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:30.330 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d3368d3d-4475-4676-9a48-e2d4dccfa3d8 00:08:30.330 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:30.589 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:30.589 10:35:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3c8cc7bb-c738-4afc-8fc2-a542a2bacafc 00:08:30.589 10:35:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d3368d3d-4475-4676-9a48-e2d4dccfa3d8 00:08:30.848 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:31.108 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:31.108 00:08:31.108 real 0m17.102s 00:08:31.108 user 0m44.047s 00:08:31.108 sys 0m3.831s 00:08:31.108 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.108 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:31.108 ************************************ 00:08:31.108 END TEST lvs_grow_dirty 00:08:31.108 ************************************ 00:08:31.108 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:31.108 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:31.108 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:31.108 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:31.108 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:31.108 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:31.108 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:31.108 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:31.108 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:31.108 nvmf_trace.0 00:08:31.108 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:31.108 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:31.108 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:31.108 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:31.108 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:31.108 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:31.108 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:31.108 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:31.108 rmmod nvme_tcp 00:08:31.108 rmmod nvme_fabrics 00:08:31.368 rmmod nvme_keyring 00:08:31.368 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:31.368 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:31.368 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:31.368 
10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1549025 ']' 00:08:31.368 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1549025 00:08:31.368 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1549025 ']' 00:08:31.368 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1549025 00:08:31.368 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:31.368 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:31.368 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1549025 00:08:31.368 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:31.368 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:31.368 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1549025' 00:08:31.368 killing process with pid 1549025 00:08:31.368 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1549025 00:08:31.368 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1549025 00:08:31.368 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:31.368 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:31.368 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:31.368 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:31.368 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:31.368 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:31.368 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:31.368 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:31.368 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:31.368 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.368 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:31.368 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.906 10:35:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:33.906 00:08:33.906 real 0m42.012s 00:08:33.906 user 1m5.006s 00:08:33.906 sys 0m10.180s 00:08:33.906 10:35:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.906 10:35:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:33.906 ************************************ 00:08:33.906 END TEST nvmf_lvs_grow 00:08:33.906 ************************************ 00:08:33.906 10:35:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:33.906 10:35:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:33.906 10:35:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.906 10:35:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:33.906 ************************************ 00:08:33.906 START TEST nvmf_bdev_io_wait 00:08:33.906 ************************************ 00:08:33.906 10:35:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:33.906 * Looking for test storage... 00:08:33.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:33.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.906 --rc genhtml_branch_coverage=1 00:08:33.906 --rc genhtml_function_coverage=1 00:08:33.906 --rc genhtml_legend=1 00:08:33.906 --rc geninfo_all_blocks=1 00:08:33.906 --rc geninfo_unexecuted_blocks=1 00:08:33.906 00:08:33.906 ' 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:33.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.906 --rc genhtml_branch_coverage=1 00:08:33.906 --rc genhtml_function_coverage=1 00:08:33.906 --rc genhtml_legend=1 00:08:33.906 --rc geninfo_all_blocks=1 00:08:33.906 --rc geninfo_unexecuted_blocks=1 00:08:33.906 00:08:33.906 ' 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:33.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.906 --rc genhtml_branch_coverage=1 00:08:33.906 --rc genhtml_function_coverage=1 00:08:33.906 --rc genhtml_legend=1 00:08:33.906 --rc geninfo_all_blocks=1 00:08:33.906 --rc geninfo_unexecuted_blocks=1 00:08:33.906 00:08:33.906 ' 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:33.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.906 --rc genhtml_branch_coverage=1 00:08:33.906 --rc genhtml_function_coverage=1 00:08:33.906 --rc genhtml_legend=1 00:08:33.906 --rc geninfo_all_blocks=1 00:08:33.906 --rc geninfo_unexecuted_blocks=1 00:08:33.906 00:08:33.906 ' 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:33.906 10:35:41 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:33.906 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:33.907 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:33.907 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:33.907 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:33.907 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:33.907 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:33.907 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:33.907 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.907 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.907 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.907 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.907 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.907 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.907 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:33.907 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.907 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:33.907 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:33.907 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:33.907 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:33.907 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:33.907 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:33.907 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:33.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:33.907 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:33.907 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:33.907 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:33.907 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:33.907 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:08:33.907 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:33.907 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:33.907 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:33.907 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:33.907 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:33.907 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:33.907 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.907 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:33.907 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.907 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:33.907 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:33.907 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:33.907 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:40.539 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:40.539 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:40.539 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:40.539 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:40.539 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:40.539 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:40.539 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:40.539 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:40.539 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:40.539 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:40.539 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:40.539 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:40.539 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:40.539 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:40.539 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:40.539 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:40.539 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:40.539 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:40.539 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:40.539 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:40.539 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:40.539 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:40.539 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:40.539 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:40.539 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:40.539 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:40.539 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:40.539 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:40.539 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:40.540 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:40.540 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.540 10:35:46 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:40.540 Found net devices under 0000:86:00.0: cvl_0_0 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:40.540 Found net devices under 0000:86:00.1: cvl_0_1 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:40.540 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:40.540 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:40.540 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:40.540 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:40.540 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:40.540 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.454 ms 00:08:40.540 00:08:40.540 --- 10.0.0.2 ping statistics --- 00:08:40.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.540 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms 00:08:40.540 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:40.540 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:40.540 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:08:40.540 00:08:40.540 --- 10.0.0.1 ping statistics --- 00:08:40.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.540 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:08:40.540 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:40.540 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:40.540 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:40.540 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:40.540 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:40.540 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1553238 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1553238 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1553238 ']' 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:40.541 [2024-11-19 10:35:47.117353] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:08:40.541 [2024-11-19 10:35:47.117406] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.541 [2024-11-19 10:35:47.194714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:40.541 [2024-11-19 10:35:47.237383] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:40.541 [2024-11-19 10:35:47.237423] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:40.541 [2024-11-19 10:35:47.237432] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:40.541 [2024-11-19 10:35:47.237441] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:40.541 [2024-11-19 10:35:47.237448] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:40.541 [2024-11-19 10:35:47.239025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.541 [2024-11-19 10:35:47.239135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:40.541 [2024-11-19 10:35:47.239243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.541 [2024-11-19 10:35:47.239244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:08:40.541 [2024-11-19 10:35:47.391936] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:40.541 Malloc0 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:40.541 [2024-11-19 10:35:47.447415] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1553267 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1553269 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:40.541 { 00:08:40.541 "params": { 
00:08:40.541 "name": "Nvme$subsystem", 00:08:40.541 "trtype": "$TEST_TRANSPORT", 00:08:40.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:40.541 "adrfam": "ipv4", 00:08:40.541 "trsvcid": "$NVMF_PORT", 00:08:40.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:40.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:40.541 "hdgst": ${hdgst:-false}, 00:08:40.541 "ddgst": ${ddgst:-false} 00:08:40.541 }, 00:08:40.541 "method": "bdev_nvme_attach_controller" 00:08:40.541 } 00:08:40.541 EOF 00:08:40.541 )") 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1553271 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1553274 00:08:40.541 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:40.541 { 00:08:40.542 "params": { 00:08:40.542 "name": "Nvme$subsystem", 00:08:40.542 "trtype": "$TEST_TRANSPORT", 00:08:40.542 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:40.542 "adrfam": "ipv4", 00:08:40.542 "trsvcid": "$NVMF_PORT", 00:08:40.542 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:40.542 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:40.542 "hdgst": ${hdgst:-false}, 00:08:40.542 "ddgst": ${ddgst:-false} 00:08:40.542 }, 00:08:40.542 "method": "bdev_nvme_attach_controller" 00:08:40.542 } 00:08:40.542 EOF 00:08:40.542 )") 00:08:40.542 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:40.542 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:40.542 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:40.542 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:40.542 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:40.542 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:40.542 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:40.542 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:40.542 { 00:08:40.542 "params": { 
00:08:40.542 "name": "Nvme$subsystem", 00:08:40.542 "trtype": "$TEST_TRANSPORT", 00:08:40.542 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:40.542 "adrfam": "ipv4", 00:08:40.542 "trsvcid": "$NVMF_PORT", 00:08:40.542 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:40.542 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:40.542 "hdgst": ${hdgst:-false}, 00:08:40.542 "ddgst": ${ddgst:-false} 00:08:40.542 }, 00:08:40.542 "method": "bdev_nvme_attach_controller" 00:08:40.542 } 00:08:40.542 EOF 00:08:40.542 )") 00:08:40.542 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:40.542 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:40.542 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:40.542 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:40.542 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:40.542 { 00:08:40.542 "params": { 00:08:40.542 "name": "Nvme$subsystem", 00:08:40.542 "trtype": "$TEST_TRANSPORT", 00:08:40.542 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:40.542 "adrfam": "ipv4", 00:08:40.542 "trsvcid": "$NVMF_PORT", 00:08:40.542 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:40.542 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:40.542 "hdgst": ${hdgst:-false}, 00:08:40.542 "ddgst": ${ddgst:-false} 00:08:40.542 }, 00:08:40.542 "method": "bdev_nvme_attach_controller" 00:08:40.542 } 00:08:40.542 EOF 00:08:40.542 )") 00:08:40.542 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:40.542 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1553267 00:08:40.542 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:40.542 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:40.542 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:40.542 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:40.542 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:40.542 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:40.542 "params": { 00:08:40.542 "name": "Nvme1", 00:08:40.542 "trtype": "tcp", 00:08:40.542 "traddr": "10.0.0.2", 00:08:40.542 "adrfam": "ipv4", 00:08:40.542 "trsvcid": "4420", 00:08:40.542 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:40.542 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:40.542 "hdgst": false, 00:08:40.542 "ddgst": false 00:08:40.542 }, 00:08:40.542 "method": "bdev_nvme_attach_controller" 00:08:40.542 }' 00:08:40.542 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:40.542 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:40.542 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:40.542 "params": { 00:08:40.542 "name": "Nvme1", 00:08:40.542 "trtype": "tcp", 00:08:40.542 "traddr": "10.0.0.2", 00:08:40.542 "adrfam": "ipv4", 00:08:40.542 "trsvcid": "4420", 00:08:40.542 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:40.542 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:40.542 "hdgst": false, 00:08:40.542 "ddgst": false 00:08:40.542 }, 00:08:40.542 "method": "bdev_nvme_attach_controller" 00:08:40.542 }' 00:08:40.542 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:40.542 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:40.542 "params": { 00:08:40.542 "name": "Nvme1", 00:08:40.542 "trtype": "tcp", 00:08:40.542 "traddr": "10.0.0.2", 00:08:40.542 "adrfam": "ipv4", 00:08:40.542 "trsvcid": "4420", 00:08:40.542 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:40.542 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:40.542 "hdgst": false, 00:08:40.542 "ddgst": false 00:08:40.542 }, 00:08:40.542 "method": "bdev_nvme_attach_controller" 00:08:40.542 }' 00:08:40.542 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:40.542 10:35:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:40.542 "params": { 00:08:40.542 "name": "Nvme1", 00:08:40.542 "trtype": "tcp", 00:08:40.542 "traddr": "10.0.0.2", 00:08:40.542 "adrfam": "ipv4", 00:08:40.542 "trsvcid": "4420", 00:08:40.542 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:40.542 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:40.542 "hdgst": false, 00:08:40.542 "ddgst": false 00:08:40.542 }, 00:08:40.542 "method": "bdev_nvme_attach_controller" 00:08:40.542 }' 00:08:40.542 [2024-11-19 10:35:47.499096] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:08:40.542 [2024-11-19 10:35:47.499142] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:40.542 [2024-11-19 10:35:47.502286] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:08:40.542 [2024-11-19 10:35:47.502331] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:40.542 [2024-11-19 10:35:47.503543] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:08:40.542 [2024-11-19 10:35:47.503584] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:40.542 [2024-11-19 10:35:47.503937] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
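With the listener up, four bdevperf instances attach to cnode1 in parallel, one per opcode, each pinned to its own core (masks 0x10/0x20/0x40/0x80) with queue depth 128 and 4 KiB I/O for one second. The /dev/fd/63 in the trace is bash process substitution feeding each instance the resolved attach-controller JSON printed above; made explicit, the pattern is roughly:

run_perf() {  # core_mask  instance_id  workload  (illustrative helper, not in the script)
  ./build/examples/bdevperf -m "$1" -i "$2" --json <(gen_nvmf_target_json) \
    -q 128 -o 4096 -w "$3" -t 1 -s 256 &
}
run_perf 0x10 1 write   # WRITE_PID
run_perf 0x20 2 read    # READ_PID
run_perf 0x40 3 flush   # FLUSH_PID
run_perf 0x80 4 unmap   # UNMAP_PID
wait                    # each prints one of the Latency(us) tables below

run_perf is only a sketch; bdev_io_wait.sh launches the four commands inline and waits on the recorded PIDs, as the `wait 1553267` etc. steps below show.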
00:08:40.543 [2024-11-19 10:35:47.503983] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:40.543 [2024-11-19 10:35:47.688200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.543 [2024-11-19 10:35:47.731344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:40.543 [2024-11-19 10:35:47.780604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.543 [2024-11-19 10:35:47.833128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.543 [2024-11-19 10:35:47.833872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:40.543 [2024-11-19 10:35:47.876080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:40.543 [2024-11-19 10:35:47.895945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.543 [2024-11-19 10:35:47.938801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:40.543 Running I/O for 1 seconds... 00:08:40.802 Running I/O for 1 seconds... 00:08:40.802 Running I/O for 1 seconds... 00:08:40.802 Running I/O for 1 seconds... 00:08:41.740 12461.00 IOPS, 48.68 MiB/s 00:08:41.740 Latency(us) 00:08:41.740 [2024-11-19T09:35:49.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:41.740 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:41.740 Nvme1n1 : 1.01 12513.77 48.88 0.00 0.00 10193.69 1624.15 12822.26 00:08:41.740 [2024-11-19T09:35:49.189Z] =================================================================================================================== 00:08:41.740 [2024-11-19T09:35:49.189Z] Total : 12513.77 48.88 0.00 0.00 10193.69 1624.15 12822.26 00:08:41.740 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1553269 00:08:41.740 10949.00 IOPS, 42.77 MiB/s 00:08:41.740 Latency(us) 00:08:41.740 [2024-11-19T09:35:49.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:41.740 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:41.740 Nvme1n1 : 1.01 11020.06 43.05 0.00 0.00 11580.08 4359.57 20629.59 00:08:41.740 [2024-11-19T09:35:49.189Z] =================================================================================================================== 00:08:41.740 [2024-11-19T09:35:49.189Z] Total : 11020.06 43.05 0.00 0.00 11580.08 4359.57 20629.59 00:08:41.740 243912.00 IOPS, 952.78 MiB/s 00:08:41.740 Latency(us) 00:08:41.740 [2024-11-19T09:35:49.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:41.740 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:41.740 Nvme1n1 : 1.00 243531.81 951.30 0.00 0.00 523.38 236.86 1545.79 00:08:41.740 [2024-11-19T09:35:49.189Z] =================================================================================================================== 00:08:41.740 [2024-11-19T09:35:49.189Z] Total : 243531.81 951.30 0.00 0.00 523.38 236.86 1545.79 00:08:41.740 9747.00 IOPS, 38.07 MiB/s 00:08:41.740 Latency(us) 00:08:41.740 [2024-11-19T09:35:49.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:41.740 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:41.740 Nvme1n1 : 1.01 9818.97 38.36 0.00 0.00 12994.60 4758.48 26442.35 00:08:41.740 
[2024-11-19T09:35:49.189Z] =================================================================================================================== 00:08:41.740 [2024-11-19T09:35:49.189Z] Total : 9818.97 38.36 0.00 0.00 12994.60 4758.48 26442.35 00:08:41.999 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1553271 00:08:41.999 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1553274 00:08:41.999 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:41.999 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.999 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:41.999 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.999 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:41.999 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:41.999 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:41.999 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:41.999 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:41.999 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:41.999 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:41.999 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:41.999 rmmod nvme_tcp 00:08:41.999 rmmod nvme_fabrics 00:08:41.999 rmmod nvme_keyring 00:08:41.999 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:41.999 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:41.999 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:41.999 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1553238 ']' 00:08:41.999 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1553238 00:08:41.999 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1553238 ']' 00:08:41.999 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1553238 00:08:41.999 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:41.999 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:41.999 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1553238 00:08:41.999 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:41.999 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:41.999 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1553238' 00:08:41.999 killing process with pid 1553238 
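The shutdown path (nvmftestfini) mirrors the init path step for step: flush and unload the kernel NVMe/TCP stack, kill the target, strip only the SPDK-tagged firewall rule, and dismantle the namespace. In outline, using the commands traced here and just below (the netns removal happens inside _remove_spdk_ns, whose output is suppressed, so the final delete is an assumption):

sync
modprobe -v -r nvme-tcp       # rmmod nvme_tcp / nvme_fabrics / nvme_keyring above
modprobe -v -r nvme-fabrics
kill 1553238 && wait 1553238  # killprocess: the nvmf_tgt reactor_0 checked above
# drop only rules carrying the SPDK_NVMF comment; the rest of the ruleset survives
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk   # assumed body of _remove_spdk_ns (xtrace disabled)
ip -4 addr flush cvl_0_1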
00:08:41.999 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1553238 00:08:42.000 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1553238 00:08:42.259 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:42.259 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:42.259 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:42.259 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:42.259 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:42.259 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:42.259 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:42.259 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:42.259 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:42.259 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.259 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:42.259 10:35:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.795 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:44.795 00:08:44.795 real 0m10.722s 00:08:44.795 user 0m16.259s 00:08:44.795 sys 0m6.155s 00:08:44.795 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.795 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:44.795 ************************************ 00:08:44.795 END TEST nvmf_bdev_io_wait 00:08:44.795 ************************************ 00:08:44.795 10:35:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:44.795 10:35:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:44.795 10:35:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.795 10:35:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:44.795 ************************************ 00:08:44.795 START TEST nvmf_queue_depth 00:08:44.795 ************************************ 00:08:44.795 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:44.795 * Looking for test storage... 
00:08:44.795 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:44.795 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:44.795 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:08:44.795 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:44.795 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:44.795 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:44.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.796 --rc genhtml_branch_coverage=1 00:08:44.796 --rc genhtml_function_coverage=1 00:08:44.796 --rc genhtml_legend=1 00:08:44.796 --rc geninfo_all_blocks=1 00:08:44.796 --rc geninfo_unexecuted_blocks=1 00:08:44.796 00:08:44.796 ' 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:44.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.796 --rc genhtml_branch_coverage=1 00:08:44.796 --rc genhtml_function_coverage=1 00:08:44.796 --rc genhtml_legend=1 00:08:44.796 --rc geninfo_all_blocks=1 00:08:44.796 --rc geninfo_unexecuted_blocks=1 00:08:44.796 00:08:44.796 ' 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:44.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.796 --rc genhtml_branch_coverage=1 00:08:44.796 --rc genhtml_function_coverage=1 00:08:44.796 --rc genhtml_legend=1 00:08:44.796 --rc geninfo_all_blocks=1 00:08:44.796 --rc geninfo_unexecuted_blocks=1 00:08:44.796 00:08:44.796 ' 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:44.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.796 --rc genhtml_branch_coverage=1 00:08:44.796 --rc genhtml_function_coverage=1 00:08:44.796 --rc genhtml_legend=1 00:08:44.796 --rc geninfo_all_blocks=1 00:08:44.796 --rc geninfo_unexecuted_blocks=1 00:08:44.796 00:08:44.796 ' 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.796 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.797 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:44.797 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.797 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:44.797 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:44.797 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:44.797 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:44.797 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:44.797 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:44.797 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:44.797 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:44.797 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:44.797 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:44.797 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:44.797 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:44.797 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:08:44.797 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:44.797 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:44.797 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:44.797 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:44.797 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:44.797 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:44.797 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:44.797 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.797 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:44.797 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.797 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:44.797 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:44.797 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:44.797 10:35:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:51.368 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:51.368 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:51.368 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:51.368 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:51.368 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:51.368 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:51.368 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:51.368 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:51.368 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:51.368 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:51.369 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:51.369 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:51.369 Found net devices under 0000:86:00.0: cvl_0_0 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:51.369 Found net devices under 0000:86:00.1: cvl_0_1 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:51.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:51.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.331 ms 00:08:51.369 00:08:51.369 --- 10.0.0.2 ping statistics --- 00:08:51.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.369 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:51.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:51.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:08:51.369 00:08:51.369 --- 10.0.0.1 ping statistics --- 00:08:51.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.369 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:51.369 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:51.370 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:51.370 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:51.370 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:51.370 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:51.370 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1557279 00:08:51.370 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1557279 00:08:51.370 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:51.370 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1557279 ']' 00:08:51.370 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.370 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:51.370 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.370 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:51.370 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:51.370 [2024-11-19 10:35:58.038727] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
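Note: the nvmf_tcp_init trace above builds the test topology entirely from the two E810 ports found earlier: cvl_0_0 is moved into a fresh network namespace cvl_0_0_ns_spdk and addressed 10.0.0.2/24 (target side), while its sibling cvl_0_1 stays in the root namespace as 10.0.0.1/24 (initiator side); an iptables rule opens TCP/4420 and both directions are verified with ping. A minimal standalone sketch of the same steps, assuming the interface names from this run (they will differ on other rigs) and root privileges:

    # mirrors nvmf/common.sh@267-291 as traced above
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP, inside namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP to the target
    ping -c 1 10.0.0.2                                                   # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # namespace -> root ns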
00:08:51.370 [2024-11-19 10:35:58.038772] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.370 [2024-11-19 10:35:58.121245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.370 [2024-11-19 10:35:58.163894] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:51.370 [2024-11-19 10:35:58.163928] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:51.370 [2024-11-19 10:35:58.163936] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:51.370 [2024-11-19 10:35:58.163943] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:51.370 [2024-11-19 10:35:58.163953] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:51.370 [2024-11-19 10:35:58.164424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.370 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:51.370 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:51.370 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:51.370 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:51.370 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:51.370 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:51.370 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:51.370 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.370 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:51.370 [2024-11-19 10:35:58.308168] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:51.370 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.370 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:51.370 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.370 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:51.370 Malloc0 00:08:51.370 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.370 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:51.370 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.370 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:51.370 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.370 10:35:58 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:51.370 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.370 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:51.370 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.370 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:51.370 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.370 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:51.370 [2024-11-19 10:35:58.358533] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:51.370 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.370 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1557307 00:08:51.370 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:51.370 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:51.370 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1557307 /var/tmp/bdevperf.sock 00:08:51.370 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1557307 ']' 00:08:51.370 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:51.370 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:51.370 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:51.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:51.370 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:51.370 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:51.370 [2024-11-19 10:35:58.411199] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
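Note: with nvmf_tgt running inside the namespace (launched via ip netns exec, as traced at nvmf/common.sh@508), queue_depth.sh configures it over /var/tmp/spdk.sock through rpc_cmd, a thin wrapper around scripts/rpc.py. The traced sequence creates a TCP transport with 8192-byte in-capsule data, a 64 MiB/512-byte-block malloc bdev, a subsystem with that bdev as a namespace, and a listener on 10.0.0.2:4420. The same sequence issued by hand, with paths as in this workspace:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420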
00:08:51.370 [2024-11-19 10:35:58.411240] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1557307 ] 00:08:51.370 [2024-11-19 10:35:58.486337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.370 [2024-11-19 10:35:58.527502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.370 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:51.370 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:51.370 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:51.370 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.370 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:51.629 NVMe0n1 00:08:51.629 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.629 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:51.629 Running I/O for 10 seconds... 00:08:53.502 11432.00 IOPS, 44.66 MiB/s [2024-11-19T09:36:02.325Z] 11769.00 IOPS, 45.97 MiB/s [2024-11-19T09:36:03.263Z] 11944.33 IOPS, 46.66 MiB/s [2024-11-19T09:36:04.200Z] 12035.50 IOPS, 47.01 MiB/s [2024-11-19T09:36:05.136Z] 12118.20 IOPS, 47.34 MiB/s [2024-11-19T09:36:06.073Z] 12177.83 IOPS, 47.57 MiB/s [2024-11-19T09:36:07.010Z] 12256.00 IOPS, 47.88 MiB/s [2024-11-19T09:36:07.949Z] 12259.62 IOPS, 47.89 MiB/s [2024-11-19T09:36:09.327Z] 12271.89 IOPS, 47.94 MiB/s [2024-11-19T09:36:09.327Z] 12275.50 IOPS, 47.95 MiB/s 00:09:01.878 Latency(us) 00:09:01.878 [2024-11-19T09:36:09.327Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:01.878 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:01.878 Verification LBA range: start 0x0 length 0x4000 00:09:01.878 NVMe0n1 : 10.05 12315.00 48.11 0.00 0.00 82877.25 9175.04 53112.65 00:09:01.878 [2024-11-19T09:36:09.327Z] =================================================================================================================== 00:09:01.878 [2024-11-19T09:36:09.327Z] Total : 12315.00 48.11 0.00 0.00 82877.25 9175.04 53112.65 00:09:01.878 { 00:09:01.878 "results": [ 00:09:01.878 { 00:09:01.878 "job": "NVMe0n1", 00:09:01.878 "core_mask": "0x1", 00:09:01.878 "workload": "verify", 00:09:01.878 "status": "finished", 00:09:01.878 "verify_range": { 00:09:01.878 "start": 0, 00:09:01.878 "length": 16384 00:09:01.878 }, 00:09:01.878 "queue_depth": 1024, 00:09:01.878 "io_size": 4096, 00:09:01.878 "runtime": 10.051079, 00:09:01.878 "iops": 12314.996230752937, 00:09:01.878 "mibps": 48.10545402637866, 00:09:01.878 "io_failed": 0, 00:09:01.878 "io_timeout": 0, 00:09:01.878 "avg_latency_us": 82877.2535830444, 00:09:01.878 "min_latency_us": 9175.04, 00:09:01.878 "max_latency_us": 53112.653913043476 00:09:01.878 } 00:09:01.878 ], 00:09:01.878 "core_count": 1 00:09:01.878 } 00:09:01.878 10:36:09 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1557307 00:09:01.878 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1557307 ']' 00:09:01.878 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1557307 00:09:01.878 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:01.878 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:01.878 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1557307 00:09:01.878 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:01.878 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:01.878 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1557307' 00:09:01.878 killing process with pid 1557307 00:09:01.878 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1557307 00:09:01.878 Received shutdown signal, test time was about 10.000000 seconds 00:09:01.878 00:09:01.878 Latency(us) 00:09:01.878 [2024-11-19T09:36:09.327Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:01.878 [2024-11-19T09:36:09.327Z] =================================================================================================================== 00:09:01.878 [2024-11-19T09:36:09.327Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:01.878 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1557307 00:09:01.878 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:01.878 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:01.878 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:01.878 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:01.878 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:01.878 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:01.878 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:01.878 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:01.878 rmmod nvme_tcp 00:09:01.878 rmmod nvme_fabrics 00:09:01.878 rmmod nvme_keyring 00:09:01.878 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:01.878 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:01.878 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:01.878 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1557279 ']' 00:09:01.878 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1557279 00:09:01.878 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1557279 ']' 00:09:01.878 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 1557279 00:09:01.878 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:01.878 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:01.878 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1557279 00:09:02.138 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:02.138 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:02.138 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1557279' 00:09:02.138 killing process with pid 1557279 00:09:02.138 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1557279 00:09:02.138 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1557279 00:09:02.138 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:02.138 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:02.138 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:02.138 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:02.138 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:02.138 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:02.138 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:02.138 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:02.138 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:02.138 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.138 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:02.138 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:04.676 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:04.676 00:09:04.676 real 0m19.858s 00:09:04.676 user 0m23.199s 00:09:04.676 sys 0m6.146s 00:09:04.676 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:04.676 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:04.676 ************************************ 00:09:04.676 END TEST nvmf_queue_depth 00:09:04.676 ************************************ 00:09:04.676 10:36:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:04.676 10:36:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:04.676 10:36:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:04.676 10:36:11 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:09:04.676 ************************************ 00:09:04.676 START TEST nvmf_target_multipath 00:09:04.676 ************************************ 00:09:04.676 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:04.676 * Looking for test storage... 00:09:04.676 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:04.676 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:04.676 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:09:04.676 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:04.676 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:04.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.677 --rc genhtml_branch_coverage=1 00:09:04.677 --rc genhtml_function_coverage=1 00:09:04.677 --rc genhtml_legend=1 00:09:04.677 --rc geninfo_all_blocks=1 00:09:04.677 --rc geninfo_unexecuted_blocks=1 00:09:04.677 00:09:04.677 ' 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:04.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.677 --rc genhtml_branch_coverage=1 00:09:04.677 --rc genhtml_function_coverage=1 00:09:04.677 --rc genhtml_legend=1 00:09:04.677 --rc geninfo_all_blocks=1 00:09:04.677 --rc geninfo_unexecuted_blocks=1 00:09:04.677 00:09:04.677 ' 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:04.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.677 --rc genhtml_branch_coverage=1 00:09:04.677 --rc genhtml_function_coverage=1 00:09:04.677 --rc genhtml_legend=1 00:09:04.677 --rc geninfo_all_blocks=1 00:09:04.677 --rc geninfo_unexecuted_blocks=1 00:09:04.677 00:09:04.677 ' 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:04.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.677 --rc genhtml_branch_coverage=1 00:09:04.677 --rc genhtml_function_coverage=1 00:09:04.677 --rc genhtml_legend=1 00:09:04.677 --rc geninfo_all_blocks=1 00:09:04.677 --rc geninfo_unexecuted_blocks=1 00:09:04.677 00:09:04.677 ' 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:04.677 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:04.678 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:04.678 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:04.678 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:04.678 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:04.678 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:04.678 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:04.678 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:04.678 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:04.678 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:04.678 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:04.678 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:04.678 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:04.678 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:04.678 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:04.678 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:04.678 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.678 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:04.678 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:04.678 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:04.678 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:04.678 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:04.678 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:11.251 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:11.251 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:11.251 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:11.251 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:11.251 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:11.251 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:11.252 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:11.252 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:11.252 Found net devices under 0000:86:00.0: cvl_0_0 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:11.252 10:36:17 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:11.252 Found net devices under 0000:86:00.1: cvl_0_1 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:11.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:11.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:09:11.252 00:09:11.252 --- 10.0.0.2 ping statistics --- 00:09:11.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.252 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:11.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:11.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:09:11.252 00:09:11.252 --- 10.0.0.1 ping statistics --- 00:09:11.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.252 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:09:11.252 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:11.253 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:11.253 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:11.253 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:11.253 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:11.253 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:11.253 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:11.253 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:11.253 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:11.253 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:11.253 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:11.253 only one NIC for nvmf test 00:09:11.253 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:11.253 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:11.253 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:11.253 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:11.253 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
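Note: unlike the queue_depth run, multipath.sh never reaches its test body here. It needs a second target IP (a second NIC pair), and NVMF_SECOND_TARGET_IP is empty on this host, so the script prints the notice above and leaves with exit status 0 after cleanup. The guard, reconstructed from the multipath.sh@45-48 trace lines:

    # single-NIC hosts skip the test rather than fail it
    if [ -z "$NVMF_SECOND_TARGET_IP" ]; then
        echo 'only one NIC for nvmf test'
        nvmftestfini
        exit 0
    fi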
00:09:11.253 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:11.253 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:11.253 rmmod nvme_tcp 00:09:11.253 rmmod nvme_fabrics 00:09:11.253 rmmod nvme_keyring 00:09:11.253 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:11.253 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:11.253 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:11.253 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:11.253 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:11.253 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:11.253 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:11.253 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:11.253 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:11.253 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:11.253 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:11.253 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:11.253 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:11.253 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.253 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:11.253 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.631 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:12.632 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:12.632 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:12.632 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:12.632 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:12.632 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:12.632 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:12.632 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:12.632 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:12.632 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:12.632 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:12.632 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:09:12.632 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:12.632 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:12.632 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:12.632 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:12.632 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:12.632 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:12.632 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:12.632 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:12.632 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:12.632 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:12.632 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.632 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:12.632 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.632 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:12.632 00:09:12.632 real 0m8.387s 00:09:12.632 user 0m1.811s 00:09:12.632 sys 0m4.607s 00:09:12.632 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:12.632 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:12.632 ************************************ 00:09:12.632 END TEST nvmf_target_multipath 00:09:12.632 ************************************ 00:09:12.891 10:36:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:12.891 10:36:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:12.891 10:36:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:12.891 10:36:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:12.891 ************************************ 00:09:12.891 START TEST nvmf_zcopy 00:09:12.891 ************************************ 00:09:12.891 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:12.891 * Looking for test storage... 
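Before the zcopy output continues: the multipath test above bailed out deliberately ("only one NIC for nvmf test" on this rig) and its exit path ran nvmftestfini twice, once from multipath.sh@47 and once from the EXIT trap. The same teardown closes every suite in this log: unload the kernel initiator modules under set +e (the retry loop apparently exists because nvme-tcp pins nvme-fabrics), restore iptables minus the SPDK_NVMF-tagged rules, remove the namespace, and flush the initiator address. A condensed sketch, with _remove_spdk_ns approximated by a plain ip netns delete (an assumption; the real helper does more bookkeeping):

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp                      # also drags out nvme_fabrics/nvme_keyring
        modprobe -v -r nvme-fabrics && break
    done
    set -e
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the tagged test rules
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null            # stand-in for _remove_spdk_ns
    ip -4 addr flush cvl_0_1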
00:09:12.891 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:12.891 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:12.891 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:09:12.891 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:12.891 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:12.891 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:12.891 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:12.891 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:12.891 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:12.891 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:12.891 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:12.891 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:12.891 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:12.891 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:12.891 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:12.891 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:12.891 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:12.891 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:12.891 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:12.891 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:12.891 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:12.891 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:12.891 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:12.891 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:12.891 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:12.891 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:12.891 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:12.891 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:12.891 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:12.892 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:12.892 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:12.892 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:12.892 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:12.892 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:12.892 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:12.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.892 --rc genhtml_branch_coverage=1 00:09:12.892 --rc genhtml_function_coverage=1 00:09:12.892 --rc genhtml_legend=1 00:09:12.892 --rc geninfo_all_blocks=1 00:09:12.892 --rc geninfo_unexecuted_blocks=1 00:09:12.892 00:09:12.892 ' 00:09:12.892 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:12.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.892 --rc genhtml_branch_coverage=1 00:09:12.892 --rc genhtml_function_coverage=1 00:09:12.892 --rc genhtml_legend=1 00:09:12.892 --rc geninfo_all_blocks=1 00:09:12.892 --rc geninfo_unexecuted_blocks=1 00:09:12.892 00:09:12.892 ' 00:09:12.892 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:12.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.892 --rc genhtml_branch_coverage=1 00:09:12.892 --rc genhtml_function_coverage=1 00:09:12.892 --rc genhtml_legend=1 00:09:12.892 --rc geninfo_all_blocks=1 00:09:12.892 --rc geninfo_unexecuted_blocks=1 00:09:12.892 00:09:12.892 ' 00:09:12.892 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:12.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.892 --rc genhtml_branch_coverage=1 00:09:12.892 --rc genhtml_function_coverage=1 00:09:12.892 --rc genhtml_legend=1 00:09:12.892 --rc geninfo_all_blocks=1 00:09:12.892 --rc geninfo_unexecuted_blocks=1 00:09:12.892 00:09:12.892 ' 00:09:12.892 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:12.892 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:12.892 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:12.892 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:12.892 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:12.892 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:12.892 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:12.892 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:12.892 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:12.892 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:12.892 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:12.892 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:12.892 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:12.892 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:12.892 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:12.892 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:12.892 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:12.892 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:12.892 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:12.892 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:12.892 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:12.892 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:12.892 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:13.151 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.151 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.151 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.151 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:13.151 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.151 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:13.151 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:13.151 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:13.151 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:13.151 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:13.151 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:13.151 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:13.151 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:13.151 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:13.151 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:13.151 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:13.151 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:13.151 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:13.151 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:09:13.151 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:13.151 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:13.151 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:13.151 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.151 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:13.151 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.151 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:13.151 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:13.151 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:13.151 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:19.748 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:19.748 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:19.748 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:19.748 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:19.748 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:19.748 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:19.748 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:19.748 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:19.748 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:19.748 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:19.748 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:19.748 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:19.748 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:19.748 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:19.748 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:19.748 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:19.748 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:19.748 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:19.748 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:19.748 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:19.748 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:19.748 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:19.748 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:19.748 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:19.748 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:19.748 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:19.748 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:19.748 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:19.748 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:19.748 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:19.748 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:19.749 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:19.749 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:19.749 Found net devices under 0000:86:00.0: cvl_0_0 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:19.749 Found net devices under 0000:86:00.1: cvl_0_1 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:19.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:19.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.477 ms 00:09:19.749 00:09:19.749 --- 10.0.0.2 ping statistics --- 00:09:19.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:19.749 rtt min/avg/max/mdev = 0.477/0.477/0.477/0.000 ms 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:19.749 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:19.749 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:09:19.749 00:09:19.749 --- 10.0.0.1 ping statistics --- 00:09:19.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:19.749 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1566727 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1566727 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1566727 ']' 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:19.749 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:19.749 [2024-11-19 10:36:26.369468] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:09:19.749 [2024-11-19 10:36:26.369517] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:19.749 [2024-11-19 10:36:26.432854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.749 [2024-11-19 10:36:26.473779] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:19.750 [2024-11-19 10:36:26.473815] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:19.750 [2024-11-19 10:36:26.473822] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:19.750 [2024-11-19 10:36:26.473829] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:19.750 [2024-11-19 10:36:26.473835] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:19.750 [2024-11-19 10:36:26.474455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:19.750 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:19.750 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:19.750 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:19.750 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:19.750 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:19.750 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:19.750 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:19.750 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:19.750 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.750 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:19.750 [2024-11-19 10:36:26.613960] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:19.750 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.750 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:19.750 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.750 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:19.750 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.750 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:19.750 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.750 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:19.750 [2024-11-19 10:36:26.638137] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:19.750 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.750 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:19.750 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.750 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:19.750 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.750 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:19.750 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.750 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:19.750 malloc0 00:09:19.750 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.750 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:19.750 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.750 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:19.750 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.750 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:19.750 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:19.750 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:19.750 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:19.750 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:19.750 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:19.750 { 00:09:19.750 "params": { 00:09:19.750 "name": "Nvme$subsystem", 00:09:19.750 "trtype": "$TEST_TRANSPORT", 00:09:19.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:19.750 "adrfam": "ipv4", 00:09:19.750 "trsvcid": "$NVMF_PORT", 00:09:19.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:19.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:19.750 "hdgst": ${hdgst:-false}, 00:09:19.750 "ddgst": ${ddgst:-false} 00:09:19.750 }, 00:09:19.750 "method": "bdev_nvme_attach_controller" 00:09:19.750 } 00:09:19.750 EOF 00:09:19.750 )") 00:09:19.750 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:19.750 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
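At this point the zcopy test has a fully configured target: nvmfappstart launched nvmf_tgt inside the namespace on core 1 (-m 0x2) and waited on /var/tmp/spdk.sock, then a handful of rpc_cmd calls built the data path that bdevperf will exercise. Condensed from the trace (rpc_cmd wraps scripts/rpc.py; the explicit paths here are illustrative):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!                            # the suite then blocks on waitforlisten "$nvmfpid"
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy       # zero-copy-enabled TCP transport
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0              # 32 MiB bdev, 4 KiB blocks
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The gen_nvmf_target_json trace that continues below builds the matching initiator-side JSON for bdevperf.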
00:09:19.750 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:09:19.750 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:09:19.750 "params": {
00:09:19.750 "name": "Nvme1",
00:09:19.750 "trtype": "tcp",
00:09:19.750 "traddr": "10.0.0.2",
00:09:19.750 "adrfam": "ipv4",
00:09:19.750 "trsvcid": "4420",
00:09:19.750 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:09:19.750 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:09:19.750 "hdgst": false,
00:09:19.750 "ddgst": false
00:09:19.750 },
00:09:19.750 "method": "bdev_nvme_attach_controller"
00:09:19.750 }'
00:09:19.750 [2024-11-19 10:36:26.719422] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:09:19.750 [2024-11-19 10:36:26.719465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1566747 ]
00:09:19.750 [2024-11-19 10:36:26.795476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:19.750 [2024-11-19 10:36:26.836892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 Running I/O for 10 seconds...
00:09:22.058 8433.00 IOPS, 65.88 MiB/s
[2024-11-19T09:36:30.442Z] 8492.50 IOPS, 66.35 MiB/s
[2024-11-19T09:36:31.377Z] 8507.67 IOPS, 66.47 MiB/s
[2024-11-19T09:36:32.310Z] 8488.50 IOPS, 66.32 MiB/s
[2024-11-19T09:36:33.242Z] 8498.20 IOPS, 66.39 MiB/s
[2024-11-19T09:36:34.175Z] 8512.33 IOPS, 66.50 MiB/s
[2024-11-19T09:36:35.550Z] 8521.29 IOPS, 66.57 MiB/s
[2024-11-19T09:36:36.488Z] 8524.62 IOPS, 66.60 MiB/s
[2024-11-19T09:36:37.423Z] 8527.78 IOPS, 66.62 MiB/s
[2024-11-19T09:36:37.423Z] 8530.00 IOPS, 66.64 MiB/s
00:09:29.974 Latency(us)
00:09:29.974 [2024-11-19T09:36:37.423Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:29.974 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:09:29.974 Verification LBA range: start 0x0 length 0x1000
00:09:29.974 Nvme1n1 : 10.01 8534.76 66.68 0.00 0.00 14955.58 2407.74 22795.13
00:09:29.974 [2024-11-19T09:36:37.423Z] ===================================================================================================================
00:09:29.974 [2024-11-19T09:36:37.423Z] Total : 8534.76 66.68 0.00 0.00 14955.58 2407.74 22795.13
00:09:29.974 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1568585
00:09:29.974 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:09:29.974 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:29.974 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:09:29.974 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:09:29.974 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:09:29.974 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:09:29.974 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:09:29.974 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:09:29.974 {
00:09:29.974 "params": {
00:09:29.974 "name": 
"Nvme$subsystem", 00:09:29.974 "trtype": "$TEST_TRANSPORT", 00:09:29.974 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:29.974 "adrfam": "ipv4", 00:09:29.974 "trsvcid": "$NVMF_PORT", 00:09:29.974 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:29.974 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:29.974 "hdgst": ${hdgst:-false}, 00:09:29.974 "ddgst": ${ddgst:-false} 00:09:29.974 }, 00:09:29.974 "method": "bdev_nvme_attach_controller" 00:09:29.974 } 00:09:29.974 EOF 00:09:29.974 )") 00:09:29.974 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:29.974 [2024-11-19 10:36:37.316314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.974 [2024-11-19 10:36:37.316355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.974 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:29.974 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:29.974 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:29.974 "params": { 00:09:29.974 "name": "Nvme1", 00:09:29.974 "trtype": "tcp", 00:09:29.974 "traddr": "10.0.0.2", 00:09:29.974 "adrfam": "ipv4", 00:09:29.974 "trsvcid": "4420", 00:09:29.974 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:29.974 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:29.974 "hdgst": false, 00:09:29.974 "ddgst": false 00:09:29.974 }, 00:09:29.974 "method": "bdev_nvme_attach_controller" 00:09:29.974 }' 00:09:29.974 [2024-11-19 10:36:37.328316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.974 [2024-11-19 10:36:37.328329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.974 [2024-11-19 10:36:37.340343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.974 [2024-11-19 10:36:37.340353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.974 [2024-11-19 10:36:37.352375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.974 [2024-11-19 10:36:37.352386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.974 [2024-11-19 10:36:37.357338] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:09:29.974 [2024-11-19 10:36:37.357383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1568585 ] 00:09:29.975 [2024-11-19 10:36:37.364406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.975 [2024-11-19 10:36:37.364417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.975 [2024-11-19 10:36:37.376436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.975 [2024-11-19 10:36:37.376446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.975 [2024-11-19 10:36:37.388471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.975 [2024-11-19 10:36:37.388480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.975 [2024-11-19 10:36:37.400512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.975 [2024-11-19 10:36:37.400527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.975 [2024-11-19 10:36:37.412538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.975 [2024-11-19 10:36:37.412547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.234 [2024-11-19 10:36:37.424566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.234 [2024-11-19 10:36:37.424581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.234 [2024-11-19 10:36:37.432627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.234 [2024-11-19 10:36:37.436601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.234 [2024-11-19 10:36:37.436611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.234 [2024-11-19 10:36:37.448632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.234 [2024-11-19 10:36:37.448646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.234 [2024-11-19 10:36:37.460662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.234 [2024-11-19 10:36:37.460671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.234 [2024-11-19 10:36:37.472697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.234 [2024-11-19 10:36:37.472708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.234 [2024-11-19 10:36:37.473478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.234 [2024-11-19 10:36:37.484738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.234 [2024-11-19 10:36:37.484753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.234 [2024-11-19 10:36:37.496771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.235 [2024-11-19 10:36:37.496789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.235 [2024-11-19 10:36:37.508799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:09:30.235 [2024-11-19 10:36:37.508813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.235 [2024-11-19 10:36:37.520829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.235 [2024-11-19 10:36:37.520841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.235 [2024-11-19 10:36:37.532864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.235 [2024-11-19 10:36:37.532876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.235 [2024-11-19 10:36:37.544895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.235 [2024-11-19 10:36:37.544907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.235 [2024-11-19 10:36:37.556923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.235 [2024-11-19 10:36:37.556932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.235 [2024-11-19 10:36:37.568990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.235 [2024-11-19 10:36:37.569011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.235 [2024-11-19 10:36:37.581006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.235 [2024-11-19 10:36:37.581022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.235 [2024-11-19 10:36:37.593051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.235 [2024-11-19 10:36:37.593066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.235 [2024-11-19 10:36:37.605066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.235 [2024-11-19 10:36:37.605080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.235 [2024-11-19 10:36:37.617089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.235 [2024-11-19 10:36:37.617098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.235 [2024-11-19 10:36:37.629128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.235 [2024-11-19 10:36:37.629145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.235 Running I/O for 5 seconds... 
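Everything from here to the end of the excerpt is the randrw phase: while bdevperf drives I/O, the test keeps re-issuing nvmf_subsystem_add_ns for a namespace that already exists, so each attempt logs the pair "Requested NSID 1 already in use" (subsystem.c) and "Unable to add namespace" (nvmf_rpc.c). These appear to be expected, deliberately provoked failures exercising the subsystem pause/resume path of the RPC under load, not test errors; the suite reports its own pass/fail separately. One iteration of the pattern, as a hedged reconstruction (the loop driving it lives in target/zcopy.sh, outside this excerpt):

    # Re-adding an NSID that is already attached must fail cleanly while I/O is in flight.
    if ! ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1; then
        echo "add_ns rejected as expected: NSID 1 already in use"
    fi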
00:09:30.235 [2024-11-19 10:36:37.645348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.235 [2024-11-19 10:36:37.645367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.235 [2024-11-19 10:36:37.656488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.235 [2024-11-19 10:36:37.656507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.235 [2024-11-19 10:36:37.671157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.235 [2024-11-19 10:36:37.671176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.494 [2024-11-19 10:36:37.685090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.494 [2024-11-19 10:36:37.685108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.494 [2024-11-19 10:36:37.699853] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.494 [2024-11-19 10:36:37.699871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.494 [2024-11-19 10:36:37.715518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.494 [2024-11-19 10:36:37.715536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.494 [2024-11-19 10:36:37.729719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.494 [2024-11-19 10:36:37.729742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.494 [2024-11-19 10:36:37.745084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.494 [2024-11-19 10:36:37.745103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.494 [2024-11-19 10:36:37.759344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.494 [2024-11-19 10:36:37.759372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.494 [2024-11-19 10:36:37.773700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.494 [2024-11-19 10:36:37.773719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.494 [2024-11-19 10:36:37.788050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.494 [2024-11-19 10:36:37.788069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.494 [2024-11-19 10:36:37.799057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.494 [2024-11-19 10:36:37.799076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.494 [2024-11-19 10:36:37.813632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.494 [2024-11-19 10:36:37.813650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.494 [2024-11-19 10:36:37.829490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.494 [2024-11-19 10:36:37.829508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.494 [2024-11-19 10:36:37.843549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.495 
[2024-11-19 10:36:37.843567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.495 [2024-11-19 10:36:37.857504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.495 [2024-11-19 10:36:37.857522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.495 [2024-11-19 10:36:37.871770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.495 [2024-11-19 10:36:37.871788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.495 [2024-11-19 10:36:37.882528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.495 [2024-11-19 10:36:37.882545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.495 [2024-11-19 10:36:37.897056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.495 [2024-11-19 10:36:37.897074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.495 [2024-11-19 10:36:37.910854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.495 [2024-11-19 10:36:37.910873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.495 [2024-11-19 10:36:37.925257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.495 [2024-11-19 10:36:37.925276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.495 [2024-11-19 10:36:37.939378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.495 [2024-11-19 10:36:37.939397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.752 [2024-11-19 10:36:37.953337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.752 [2024-11-19 10:36:37.953355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.752 [2024-11-19 10:36:37.967205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.752 [2024-11-19 10:36:37.967223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.752 [2024-11-19 10:36:37.981610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.752 [2024-11-19 10:36:37.981628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.752 [2024-11-19 10:36:37.995544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.752 [2024-11-19 10:36:37.995562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.752 [2024-11-19 10:36:38.009954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.752 [2024-11-19 10:36:38.009979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.752 [2024-11-19 10:36:38.020873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.752 [2024-11-19 10:36:38.020891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.752 [2024-11-19 10:36:38.035706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.752 [2024-11-19 10:36:38.035725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.752 [2024-11-19 10:36:38.050110] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.752 [2024-11-19 10:36:38.050138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.752 [2024-11-19 10:36:38.064572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.752 [2024-11-19 10:36:38.064590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.752 [2024-11-19 10:36:38.078834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.752 [2024-11-19 10:36:38.078859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.752 [2024-11-19 10:36:38.092709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.752 [2024-11-19 10:36:38.092731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.752 [2024-11-19 10:36:38.107157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.752 [2024-11-19 10:36:38.107175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.752 [2024-11-19 10:36:38.121115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.752 [2024-11-19 10:36:38.121134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.752 [2024-11-19 10:36:38.135210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.752 [2024-11-19 10:36:38.135229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.752 [2024-11-19 10:36:38.149306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.752 [2024-11-19 10:36:38.149325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.752 [2024-11-19 10:36:38.163217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.752 [2024-11-19 10:36:38.163236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.752 [2024-11-19 10:36:38.174116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.752 [2024-11-19 10:36:38.174135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.752 [2024-11-19 10:36:38.183911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.752 [2024-11-19 10:36:38.183930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.752 [2024-11-19 10:36:38.198780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.752 [2024-11-19 10:36:38.198800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.011 [2024-11-19 10:36:38.212559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.011 [2024-11-19 10:36:38.212578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.011 [2024-11-19 10:36:38.226980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.011 [2024-11-19 10:36:38.226998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.011 [2024-11-19 10:36:38.238493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.011 [2024-11-19 10:36:38.238512] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.011 [2024-11-19 10:36:38.248121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.011 [2024-11-19 10:36:38.248140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.011 [2024-11-19 10:36:38.257863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.011 [2024-11-19 10:36:38.257886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.011 [2024-11-19 10:36:38.272463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.011 [2024-11-19 10:36:38.272482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.011 [2024-11-19 10:36:38.286394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.011 [2024-11-19 10:36:38.286412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.011 [2024-11-19 10:36:38.300828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.011 [2024-11-19 10:36:38.300846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.011 [2024-11-19 10:36:38.315147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.011 [2024-11-19 10:36:38.315165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.011 [2024-11-19 10:36:38.329367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.011 [2024-11-19 10:36:38.329386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.011 [2024-11-19 10:36:38.344014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.011 [2024-11-19 10:36:38.344032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.011 [2024-11-19 10:36:38.359664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.011 [2024-11-19 10:36:38.359683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.011 [2024-11-19 10:36:38.374000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.011 [2024-11-19 10:36:38.374019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.011 [2024-11-19 10:36:38.387989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.011 [2024-11-19 10:36:38.388008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.011 [2024-11-19 10:36:38.402280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.011 [2024-11-19 10:36:38.402299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.011 [2024-11-19 10:36:38.416433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.011 [2024-11-19 10:36:38.416451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.011 [2024-11-19 10:36:38.430788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.011 [2024-11-19 10:36:38.430806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.011 [2024-11-19 10:36:38.446228] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.011 [2024-11-19 10:36:38.446247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.270 [2024-11-19 10:36:38.460589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.270 [2024-11-19 10:36:38.460608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.270 [2024-11-19 10:36:38.474527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.270 [2024-11-19 10:36:38.474546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.270 [2024-11-19 10:36:38.488724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.270 [2024-11-19 10:36:38.488743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.270 [2024-11-19 10:36:38.502791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.270 [2024-11-19 10:36:38.502810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.270 [2024-11-19 10:36:38.517329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.270 [2024-11-19 10:36:38.517348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.270 [2024-11-19 10:36:38.528419] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.270 [2024-11-19 10:36:38.528443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.270 [2024-11-19 10:36:38.542901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.270 [2024-11-19 10:36:38.542920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.270 [2024-11-19 10:36:38.556457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.270 [2024-11-19 10:36:38.556476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.270 [2024-11-19 10:36:38.571017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.270 [2024-11-19 10:36:38.571036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.270 [2024-11-19 10:36:38.585162] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.270 [2024-11-19 10:36:38.585182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.270 [2024-11-19 10:36:38.596865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.270 [2024-11-19 10:36:38.596884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.270 [2024-11-19 10:36:38.611385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.270 [2024-11-19 10:36:38.611404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.270 [2024-11-19 10:36:38.625543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.270 [2024-11-19 10:36:38.625562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.270 16286.00 IOPS, 127.23 MiB/s [2024-11-19T09:36:38.719Z] [2024-11-19 10:36:38.639974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:31.270 [2024-11-19 10:36:38.639993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.270 [2024-11-19 10:36:38.654260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.270 [2024-11-19 10:36:38.654278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.270 [2024-11-19 10:36:38.665557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.270 [2024-11-19 10:36:38.665575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.270 [2024-11-19 10:36:38.674922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.270 [2024-11-19 10:36:38.674940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.270 [2024-11-19 10:36:38.684562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.270 [2024-11-19 10:36:38.684580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.270 [2024-11-19 10:36:38.694134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.270 [2024-11-19 10:36:38.694152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.270 [2024-11-19 10:36:38.708926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.270 [2024-11-19 10:36:38.708944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.529 [2024-11-19 10:36:38.723008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.529 [2024-11-19 10:36:38.723026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.529 [2024-11-19 10:36:38.737412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.529 [2024-11-19 10:36:38.737430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.529 [2024-11-19 10:36:38.751556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.529 [2024-11-19 10:36:38.751576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.529 [2024-11-19 10:36:38.762420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.529 [2024-11-19 10:36:38.762438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.529 [2024-11-19 10:36:38.776682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.529 [2024-11-19 10:36:38.776700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.529 [2024-11-19 10:36:38.790503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.529 [2024-11-19 10:36:38.790521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.529 [2024-11-19 10:36:38.804539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.529 [2024-11-19 10:36:38.804557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.529 [2024-11-19 10:36:38.818827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.529 [2024-11-19 10:36:38.818845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.529 [2024-11-19 10:36:38.833462] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.529 [2024-11-19 10:36:38.833480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.529 [2024-11-19 10:36:38.844638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.529 [2024-11-19 10:36:38.844656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.529 [2024-11-19 10:36:38.859259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.529 [2024-11-19 10:36:38.859278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.529 [2024-11-19 10:36:38.873287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.529 [2024-11-19 10:36:38.873306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.529 [2024-11-19 10:36:38.887555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.529 [2024-11-19 10:36:38.887574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.529 [2024-11-19 10:36:38.902155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.529 [2024-11-19 10:36:38.902174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.529 [2024-11-19 10:36:38.913481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.529 [2024-11-19 10:36:38.913499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.529 [2024-11-19 10:36:38.927799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.529 [2024-11-19 10:36:38.927817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.529 [2024-11-19 10:36:38.941840] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.529 [2024-11-19 10:36:38.941862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.529 [2024-11-19 10:36:38.956032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.529 [2024-11-19 10:36:38.956050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.529 [2024-11-19 10:36:38.964991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.529 [2024-11-19 10:36:38.965009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.788 [2024-11-19 10:36:38.979589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.788 [2024-11-19 10:36:38.979607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.788 [2024-11-19 10:36:38.993750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.788 [2024-11-19 10:36:38.993768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.788 [2024-11-19 10:36:39.005254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.788 [2024-11-19 10:36:39.005271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.788 [2024-11-19 10:36:39.014681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.788 [2024-11-19 10:36:39.014699] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.788 [2024-11-19 10:36:39.029364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.788 [2024-11-19 10:36:39.029381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.788 [2024-11-19 10:36:39.043833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.788 [2024-11-19 10:36:39.043850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.788 [2024-11-19 10:36:39.059124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.788 [2024-11-19 10:36:39.059142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.788 [2024-11-19 10:36:39.073571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.788 [2024-11-19 10:36:39.073589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.788 [2024-11-19 10:36:39.087685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.788 [2024-11-19 10:36:39.087703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.788 [2024-11-19 10:36:39.101641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.788 [2024-11-19 10:36:39.101659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.788 [2024-11-19 10:36:39.115862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.788 [2024-11-19 10:36:39.115879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.788 [2024-11-19 10:36:39.129754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.788 [2024-11-19 10:36:39.129772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.788 [2024-11-19 10:36:39.143761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.788 [2024-11-19 10:36:39.143779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.788 [2024-11-19 10:36:39.158247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.788 [2024-11-19 10:36:39.158266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.788 [2024-11-19 10:36:39.169391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.788 [2024-11-19 10:36:39.169409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.788 [2024-11-19 10:36:39.184431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.788 [2024-11-19 10:36:39.184448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.788 [2024-11-19 10:36:39.199825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.788 [2024-11-19 10:36:39.199844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.788 [2024-11-19 10:36:39.214020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.788 [2024-11-19 10:36:39.214039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.788 [2024-11-19 10:36:39.228007] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.788 [2024-11-19 10:36:39.228027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.047 [2024-11-19 10:36:39.242217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.047 [2024-11-19 10:36:39.242241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.047 [2024-11-19 10:36:39.256891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.047 [2024-11-19 10:36:39.256909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.047 [2024-11-19 10:36:39.272117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.047 [2024-11-19 10:36:39.272136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.047 [2024-11-19 10:36:39.286849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.047 [2024-11-19 10:36:39.286867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.047 [2024-11-19 10:36:39.302028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.047 [2024-11-19 10:36:39.302046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.047 [2024-11-19 10:36:39.316430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.047 [2024-11-19 10:36:39.316448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.047 [2024-11-19 10:36:39.330291] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.047 [2024-11-19 10:36:39.330309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.047 [2024-11-19 10:36:39.344587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.047 [2024-11-19 10:36:39.344605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.047 [2024-11-19 10:36:39.358551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.047 [2024-11-19 10:36:39.358569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.047 [2024-11-19 10:36:39.372817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.047 [2024-11-19 10:36:39.372835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.047 [2024-11-19 10:36:39.386667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.047 [2024-11-19 10:36:39.386685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.047 [2024-11-19 10:36:39.400899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.047 [2024-11-19 10:36:39.400917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.047 [2024-11-19 10:36:39.415647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.047 [2024-11-19 10:36:39.415665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.047 [2024-11-19 10:36:39.431381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.047 [2024-11-19 10:36:39.431400] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.047 [2024-11-19 10:36:39.445801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.047 [2024-11-19 10:36:39.445819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.047 [2024-11-19 10:36:39.457126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.047 [2024-11-19 10:36:39.457143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.047 [2024-11-19 10:36:39.471909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.047 [2024-11-19 10:36:39.471928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.047 [2024-11-19 10:36:39.485640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.047 [2024-11-19 10:36:39.485658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.306 [2024-11-19 10:36:39.500274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.306 [2024-11-19 10:36:39.500292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.306 [2024-11-19 10:36:39.515384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.306 [2024-11-19 10:36:39.515402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.306 [2024-11-19 10:36:39.530025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.306 [2024-11-19 10:36:39.530043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.306 [2024-11-19 10:36:39.541626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.306 [2024-11-19 10:36:39.541644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.306 [2024-11-19 10:36:39.556222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.306 [2024-11-19 10:36:39.556239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.306 [2024-11-19 10:36:39.569404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.306 [2024-11-19 10:36:39.569422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.306 [2024-11-19 10:36:39.583817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.306 [2024-11-19 10:36:39.583837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.306 [2024-11-19 10:36:39.598047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.306 [2024-11-19 10:36:39.598065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.306 [2024-11-19 10:36:39.612414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.306 [2024-11-19 10:36:39.612433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.306 [2024-11-19 10:36:39.627034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.306 [2024-11-19 10:36:39.627053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.306 [2024-11-19 10:36:39.638462] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.306 [2024-11-19 10:36:39.638481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.306 16375.50 IOPS, 127.93 MiB/s [2024-11-19T09:36:39.755Z] [2024-11-19 10:36:39.653338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.306 [2024-11-19 10:36:39.653357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.306 [2024-11-19 10:36:39.664621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.306 [2024-11-19 10:36:39.664639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.306 [2024-11-19 10:36:39.679452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.306 [2024-11-19 10:36:39.679470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.306 [2024-11-19 10:36:39.690497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.306 [2024-11-19 10:36:39.690516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.306 [2024-11-19 10:36:39.705254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.306 [2024-11-19 10:36:39.705273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.306 [2024-11-19 10:36:39.716284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.306 [2024-11-19 10:36:39.716302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.306 [2024-11-19 10:36:39.725931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.306 [2024-11-19 10:36:39.725967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.306 [2024-11-19 10:36:39.740690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.306 [2024-11-19 10:36:39.740708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.306 [2024-11-19 10:36:39.754831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.306 [2024-11-19 10:36:39.754850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.565 [2024-11-19 10:36:39.766265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.565 [2024-11-19 10:36:39.766284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.565 [2024-11-19 10:36:39.780723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.565 [2024-11-19 10:36:39.780742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.565 [2024-11-19 10:36:39.794574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.565 [2024-11-19 10:36:39.794593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.565 [2024-11-19 10:36:39.809424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.565 [2024-11-19 10:36:39.809448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.565 [2024-11-19 10:36:39.824703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:32.565 [2024-11-19 10:36:39.824722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.565 [2024-11-19 10:36:39.839008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.565 [2024-11-19 10:36:39.839031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.565 [2024-11-19 10:36:39.852982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.565 [2024-11-19 10:36:39.853001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.565 [2024-11-19 10:36:39.867079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.565 [2024-11-19 10:36:39.867097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.565 [2024-11-19 10:36:39.881648] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.565 [2024-11-19 10:36:39.881667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.565 [2024-11-19 10:36:39.896078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.565 [2024-11-19 10:36:39.896097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.565 [2024-11-19 10:36:39.912068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.565 [2024-11-19 10:36:39.912088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.565 [2024-11-19 10:36:39.922944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.565 [2024-11-19 10:36:39.922968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.565 [2024-11-19 10:36:39.937072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.565 [2024-11-19 10:36:39.937091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.565 [2024-11-19 10:36:39.951372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.565 [2024-11-19 10:36:39.951391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.565 [2024-11-19 10:36:39.965717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.565 [2024-11-19 10:36:39.965735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.565 [2024-11-19 10:36:39.976867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.565 [2024-11-19 10:36:39.976886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.565 [2024-11-19 10:36:39.991470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.565 [2024-11-19 10:36:39.991488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.565 [2024-11-19 10:36:40.005569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.565 [2024-11-19 10:36:40.005587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.823 [2024-11-19 10:36:40.020559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.823 [2024-11-19 10:36:40.020579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.823 [2024-11-19 10:36:40.036014] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.823 [2024-11-19 10:36:40.036033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.823 [2024-11-19 10:36:40.046033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.823 [2024-11-19 10:36:40.046052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.823 [2024-11-19 10:36:40.055153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.823 [2024-11-19 10:36:40.055172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.823 [2024-11-19 10:36:40.064069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.823 [2024-11-19 10:36:40.064092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.823 [2024-11-19 10:36:40.073607] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.823 [2024-11-19 10:36:40.073626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.823 [2024-11-19 10:36:40.088165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.823 [2024-11-19 10:36:40.088183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.823 [2024-11-19 10:36:40.101064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.824 [2024-11-19 10:36:40.101083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.824 [2024-11-19 10:36:40.115691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.824 [2024-11-19 10:36:40.115710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.824 [2024-11-19 10:36:40.129957] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.824 [2024-11-19 10:36:40.129975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.824 [2024-11-19 10:36:40.141480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.824 [2024-11-19 10:36:40.141497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.824 [2024-11-19 10:36:40.156018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.824 [2024-11-19 10:36:40.156036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.824 [2024-11-19 10:36:40.170184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.824 [2024-11-19 10:36:40.170201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.824 [2024-11-19 10:36:40.184677] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.824 [2024-11-19 10:36:40.184696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.824 [2024-11-19 10:36:40.195963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.824 [2024-11-19 10:36:40.195982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.824 [2024-11-19 10:36:40.210365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.824 [2024-11-19 10:36:40.210384] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.824 [2024-11-19 10:36:40.224718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.824 [2024-11-19 10:36:40.224737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.824 [2024-11-19 10:36:40.238196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.824 [2024-11-19 10:36:40.238214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.824 [2024-11-19 10:36:40.252687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.824 [2024-11-19 10:36:40.252707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.824 [2024-11-19 10:36:40.266599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.824 [2024-11-19 10:36:40.266617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.082 [2024-11-19 10:36:40.280969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.082 [2024-11-19 10:36:40.280988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.082 [2024-11-19 10:36:40.295141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.082 [2024-11-19 10:36:40.295159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.082 [2024-11-19 10:36:40.308757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.082 [2024-11-19 10:36:40.308776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.082 [2024-11-19 10:36:40.322994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.082 [2024-11-19 10:36:40.323017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.082 [2024-11-19 10:36:40.333607] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.082 [2024-11-19 10:36:40.333625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.082 [2024-11-19 10:36:40.348671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.082 [2024-11-19 10:36:40.348689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.082 [2024-11-19 10:36:40.364327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.082 [2024-11-19 10:36:40.364345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.082 [2024-11-19 10:36:40.378365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.082 [2024-11-19 10:36:40.378384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.082 [2024-11-19 10:36:40.392594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.082 [2024-11-19 10:36:40.392612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.082 [2024-11-19 10:36:40.403952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.082 [2024-11-19 10:36:40.403970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.082 [2024-11-19 10:36:40.418643] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.082 [2024-11-19 10:36:40.418661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.082 [2024-11-19 10:36:40.433030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.082 [2024-11-19 10:36:40.433048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.082 [2024-11-19 10:36:40.443753] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.082 [2024-11-19 10:36:40.443771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.082 [2024-11-19 10:36:40.458702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.082 [2024-11-19 10:36:40.458720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.082 [2024-11-19 10:36:40.472890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.082 [2024-11-19 10:36:40.472908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.082 [2024-11-19 10:36:40.487168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.082 [2024-11-19 10:36:40.487186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.082 [2024-11-19 10:36:40.498269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.082 [2024-11-19 10:36:40.498287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.082 [2024-11-19 10:36:40.512786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.082 [2024-11-19 10:36:40.512805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.082 [2024-11-19 10:36:40.526993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.082 [2024-11-19 10:36:40.527011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.341 [2024-11-19 10:36:40.541081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.341 [2024-11-19 10:36:40.541100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.341 [2024-11-19 10:36:40.555066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.341 [2024-11-19 10:36:40.555085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.341 [2024-11-19 10:36:40.568872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.341 [2024-11-19 10:36:40.568890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.341 [2024-11-19 10:36:40.583934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.341 [2024-11-19 10:36:40.583959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.341 [2024-11-19 10:36:40.598868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.341 [2024-11-19 10:36:40.598886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.341 [2024-11-19 10:36:40.613423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.341 [2024-11-19 10:36:40.613441] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.341 [2024-11-19 10:36:40.628657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.341 [2024-11-19 10:36:40.628675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.341 16351.67 IOPS, 127.75 MiB/s [2024-11-19T09:36:40.790Z] [2024-11-19 10:36:40.643103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.341 [2024-11-19 10:36:40.643121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.341 [2024-11-19 10:36:40.657021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.341 [2024-11-19 10:36:40.657038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.341 [2024-11-19 10:36:40.666497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.341 [2024-11-19 10:36:40.666514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.341 [2024-11-19 10:36:40.681509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.341 [2024-11-19 10:36:40.681526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.341 [2024-11-19 10:36:40.696723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.341 [2024-11-19 10:36:40.696741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.341 [2024-11-19 10:36:40.710647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.341 [2024-11-19 10:36:40.710666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.341 [2024-11-19 10:36:40.724309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.341 [2024-11-19 10:36:40.724326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.341 [2024-11-19 10:36:40.734134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.341 [2024-11-19 10:36:40.734152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.341 [2024-11-19 10:36:40.748377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.341 [2024-11-19 10:36:40.748395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.341 [2024-11-19 10:36:40.762689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.341 [2024-11-19 10:36:40.762707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.341 [2024-11-19 10:36:40.776943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.341 [2024-11-19 10:36:40.776967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.341 [2024-11-19 10:36:40.788319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.341 [2024-11-19 10:36:40.788336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.600 [2024-11-19 10:36:40.802799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.600 [2024-11-19 10:36:40.802817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.600 [2024-11-19 
10:36:40.816737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.600 [2024-11-19 10:36:40.816756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the identical subsystem.c:2123 / nvmf_rpc.c:1517 error pair repeats roughly every 10-15 ms from 10:36:40.830951 through 10:36:41.632337 while the test keeps retrying nvmf_subsystem_add_ns for the in-use NSID 1; one fio-style progress sample is interleaved ...]
00:09:34.374 16386.25 IOPS, 128.02 MiB/s [2024-11-19T09:36:41.823Z]
[... the same error pair continues at the same cadence from 10:36:41.646089 through 10:36:42.631158 ...]
00:09:35.409 [2024-11-19 10:36:42.644907] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:35.409 [2024-11-19 10:36:42.644927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:35.409 16386.00 IOPS, 128.02 MiB/s
00:09:35.409 Latency(us)
00:09:35.409 [2024-11-19T09:36:42.858Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:35.409 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:35.409 Nvme1n1 : 5.01 16387.95 128.03 0.00 0.00 7803.38 3647.22 18350.08
00:09:35.409 [2024-11-19T09:36:42.858Z] ===================================================================================================================
00:09:35.409 [2024-11-19T09:36:42.858Z] Total : 16387.95 128.03 0.00 0.00 7803.38 3647.22 18350.08
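(An editorial aside, not part of the captured output: the summary above is internally consistent. With the 8192-byte I/O size shown in the Job line, the IOPS and throughput columns agree:)

    16387.95 IOPS x 8192 B/IO = 134,250,086 B/s; 134,250,086 / 1024^2 = 128.03 MiB/s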
00:09:35.409 [2024-11-19 10:36:42.655096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.409 [2024-11-19 10:36:42.655113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the error pair repeats every ~12 ms from 10:36:42.667129 through 10:36:42.811513 as the retry loop winds down ...]
00:09:35.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1568585) - No such process
00:09:35.409 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1568585
00:09:35.409 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:35.409 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:35.409 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:35.409 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:35.409 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:09:35.409 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:35.409 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:35.409 delay0
00:09:35.409 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:35.409 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:09:35.409 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:35.409 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:35.409 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:35.409 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:09:35.668 [2024-11-19 10:36:42.927575] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:09:42.385 Initializing NVMe Controllers
00:09:42.385 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:42.385 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:42.385 Initialization complete. Launching workers.
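(An editorial aside: the few xtrace records above are the whole recipe for this abort test. Reproduced by hand against a live target it would look roughly like the sketch below; command names and arguments are taken verbatim from the trace, while the use of scripts/rpc.py in place of the harness's rpc_cmd wrapper is an assumption.)

    # wrap the existing malloc0 bdev in a delay bdev; the four 1000000 values set
    # the injected read/write latencies (microseconds, so roughly one second each)
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # expose the slow bdev as namespace 1 of the subsystem under test
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # drive random 50/50 I/O for 5 s at queue depth 64 so in-flight commands exist to abort
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'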
00:09:42.385 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 473
00:09:42.385 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 760, failed to submit 33
00:09:42.385 success 561, unsuccessful 199, failed 0
00:09:42.385 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:09:42.385 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:09:42.385 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:42.385 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:09:42.385 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:42.385 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:09:42.385 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:42.385 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:09:42.386 rmmod nvme_tcp
00:09:42.386 rmmod nvme_fabrics
00:09:42.386 rmmod nvme_keyring
00:09:42.386 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:42.386 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:09:42.386 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:09:42.386 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1566727 ']'
00:09:42.386 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1566727
00:09:42.386 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1566727 ']'
00:09:42.386 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1566727
00:09:42.386 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:09:42.386 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:42.386 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1566727
00:09:42.386 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:09:42.386 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:09:42.386 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1566727'
00:09:42.386 killing process with pid 1566727
00:09:42.386 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1566727
00:09:42.386 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1566727
00:09:42.386 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:09:42.386 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:09:42.386 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:09:42.386 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:09:42.386 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:09:42.386 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:09:42.386 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:09:42.386 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:09:42.386 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:09:42.386 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:42.386 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:42.386 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:44.290 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:09:44.290
00:09:44.290 real 0m31.380s
00:09:44.290 user 0m41.932s
00:09:44.290 sys 0m11.055s
00:09:44.290 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:44.290 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:44.290 ************************************
00:09:44.290 END TEST nvmf_zcopy
00:09:44.290 ************************************
00:09:44.290 10:36:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:09:44.290 10:36:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:44.290 10:36:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:44.290 10:36:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:44.290 ************************************
00:09:44.290 START TEST nvmf_nmic
00:09:44.290 ************************************
00:09:44.290 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:09:44.290 * Looking for test storage...
00:09:44.290 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:09:44.290 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:09:44.290 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version
00:09:44.290 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:09:44.549 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:09:44.549 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
[... cmp_versions splits both versions on IFS=.-: and compares them component-wise, concluding that the installed lcov 1.15 is older than 2; the per-step trace is omitted ...]
00:09:44.549 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:44.549 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
[... the same option block is then assigned to LCOV_OPTS and exported again as LCOV='lcov ...' (autotest_common.sh@1706-@1707); the three repeated copies are omitted ...]
00:09:44.549 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:09:44.549 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:09:44.549 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
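(An editorial aside: the component-wise compare traced above can be summarized in a few lines of bash. This is a simplified sketch, not the verbatim scripts/common.sh implementation, which also handles other operators and edge cases.)

    # compare versions component-wise after splitting on '.', '-' and ':'
    version_lt() {
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            ((${v1[i]:-0} < ${v2[i]:-0})) && return 0   # first differing component decides
            ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
        done
        return 1   # equal versions are not strictly less-than
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2"   # prints, matching the trace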
00:09:44.549 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:44.549 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:44.549 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:44.549 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:44.549 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:44.549 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:44.549 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:44.549 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:44.549 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:44.549 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:09:44.549 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:09:44.549 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:44.549 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:44.549 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:09:44.549 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:09:44.549 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:09:44.549 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob
00:09:44.549 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:44.549 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:44.549 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[... paths/export.sh@2-@6 prepend /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin (each already present several times) to PATH and re-export it; the full PATH strings are omitted here ...]
00:09:44.550 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0
00:09:44.550 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:09:44.550 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:09:44.550 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:09:44.550 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:44.550 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:44.550 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:09:44.550 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:09:44.550 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:09:44.550 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0
00:09:44.550 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:09:44.550 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:09:44.550 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit
00:09:44.550
10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:09:44.550 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:09:44.550 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs
00:09:44.550 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no
00:09:44.550 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns
[... remove_spdk_ns runs under xtrace_disable_per_cmd as in the teardown above, then gather_supported_nvmf_pci_devs (nvmf/common.sh@313-@344) fills the e810/x722/mlx PCI device-ID tables; the per-assignment trace is omitted ...]
00:09:51.120 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:09:51.120 Found 0000:86:00.0 (0x8086 - 0x159b)
00:09:51.120 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:09:51.120 Found 0000:86:00.1 (0x8086 - 0x159b)
[... both ports use the ice driver (neither unknown nor unbound, device ID 0x159b is not a ConnectX part, and the transport is tcp rather than rdma), so their net devices are resolved from /sys/bus/pci/devices/*/net ...]
00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:09:51.121 Found net devices under 0000:86:00.0: cvl_0_0
00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:09:51.121 Found net devices under 0000:86:00.1: cvl_0_1
00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes
00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:51.121 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:51.121 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.464 ms 00:09:51.121 00:09:51.121 --- 10.0.0.2 ping statistics --- 00:09:51.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.121 rtt min/avg/max/mdev = 0.464/0.464/0.464/0.000 ms 00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:51.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:51.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:09:51.121 00:09:51.121 --- 10.0.0.1 ping statistics --- 00:09:51.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.121 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1573989 00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1573989 00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1573989 ']' 00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:51.121 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:51.121 [2024-11-19 10:36:57.861740] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
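The trace above is the nvmf_tcp_init step from test/nvmf/common.sh doing its work: the first ice port (cvl_0_0) is moved into a private network namespace to act as the NVMe/TCP target, the second (cvl_0_1) stays in the root namespace as the initiator, an iptables rule opens the NVMe/TCP listening port, and a one-packet ping in each direction proves the 10.0.0.0/24 link before the target application is started. A minimal sketch of the same setup in plain shell, using the interface and namespace names from this run (they are specific to this host, not a general convention):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                             # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target namespace -> initiator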
00:09:51.121 [2024-11-19 10:36:57.861796] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:51.121 [2024-11-19 10:36:57.942584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:51.121 [2024-11-19 10:36:57.986911] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:51.121 [2024-11-19 10:36:57.986953] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:51.121 [2024-11-19 10:36:57.986961] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:51.121 [2024-11-19 10:36:57.986967] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:51.121 [2024-11-19 10:36:57.986988] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:51.121 [2024-11-19 10:36:57.988579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:51.121 [2024-11-19 10:36:57.988686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:51.121 [2024-11-19 10:36:57.988793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.121 [2024-11-19 10:36:57.988793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:51.121 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:51.121 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:51.121 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:51.121 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:51.121 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:51.121 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:51.121 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:51.121 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.121 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:51.121 [2024-11-19 10:36:58.126419] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:51.121 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.121 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:51.121 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.121 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:51.121 Malloc0 00:09:51.121 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.121 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:51.121 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.122 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:09:51.122 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.122 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:51.122 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.122 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:51.122 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.122 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:51.122 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.122 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:51.122 [2024-11-19 10:36:58.195649] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:51.122 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.122 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:51.122 test case1: single bdev can't be used in multiple subsystems 00:09:51.122 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:51.122 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.122 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:51.122 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.122 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:51.122 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.122 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:51.122 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.122 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:51.122 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:51.122 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.122 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:51.122 [2024-11-19 10:36:58.223558] bdev.c:8180:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:51.122 [2024-11-19 10:36:58.223578] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:51.122 [2024-11-19 10:36:58.223585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.122 request: 00:09:51.122 { 00:09:51.122 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:51.122 "namespace": { 00:09:51.122 "bdev_name": "Malloc0", 00:09:51.122 "no_auto_visible": false 
00:09:51.122 }, 00:09:51.122 "method": "nvmf_subsystem_add_ns", 00:09:51.122 "req_id": 1 00:09:51.122 } 00:09:51.122 Got JSON-RPC error response 00:09:51.122 response: 00:09:51.122 { 00:09:51.122 "code": -32602, 00:09:51.122 "message": "Invalid parameters" 00:09:51.122 } 00:09:51.122 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:51.122 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:51.122 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:51.122 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:51.122 Adding namespace failed - expected result. 00:09:51.122 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:51.122 test case2: host connect to nvmf target in multiple paths 00:09:51.122 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:51.122 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.122 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:51.122 [2024-11-19 10:36:58.235710] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:51.122 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.122 10:36:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:52.055 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:53.428 10:37:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:53.428 10:37:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:53.428 10:37:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:53.428 10:37:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:53.428 10:37:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:09:55.326 10:37:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:55.326 10:37:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:55.326 10:37:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:55.326 10:37:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:55.326 10:37:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:55.326 10:37:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:55.326 10:37:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:09:55.326 [global]
00:09:55.326 thread=1
00:09:55.326 invalidate=1
00:09:55.326 rw=write
00:09:55.326 time_based=1
00:09:55.326 runtime=1
00:09:55.326 ioengine=libaio
00:09:55.326 direct=1
00:09:55.326 bs=4096
00:09:55.326 iodepth=1
00:09:55.326 norandommap=0
00:09:55.326 numjobs=1
00:09:55.326
00:09:55.326 verify_dump=1
00:09:55.326 verify_backlog=512
00:09:55.326 verify_state_save=0
00:09:55.326 do_verify=1
00:09:55.326 verify=crc32c-intel
00:09:55.326 [job0]
00:09:55.326 filename=/dev/nvme0n1
00:09:55.326 Could not set queue depth (nvme0n1)
00:09:55.585 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:55.585 fio-3.35
00:09:55.585 Starting 1 thread
00:09:56.517
00:09:56.517 job0: (groupid=0, jobs=1): err= 0: pid=1575055: Tue Nov 19 10:37:03 2024
00:09:56.517 read: IOPS=22, BW=90.4KiB/s (92.5kB/s)(92.0KiB/1018msec)
00:09:56.517 slat (nsec): min=9013, max=23588, avg=21879.22, stdev=2841.42
00:09:56.517 clat (usec): min=40695, max=41033, avg=40957.08, stdev=72.55
00:09:56.517 lat (usec): min=40704, max=41055, avg=40978.96, stdev=74.79
00:09:56.517 clat percentiles (usec):
00:09:56.517 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157],
00:09:56.517 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:09:56.517 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:09:56.517 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:09:56.517 | 99.99th=[41157]
00:09:56.517 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets
00:09:56.517 slat (nsec): min=8175, max=35261, avg=10095.66, stdev=1415.42
00:09:56.517 clat (usec): min=119, max=374, avg=133.91, stdev=14.48
00:09:56.517 lat (usec): min=129, max=410, avg=144.01, stdev=15.33
00:09:56.517 clat percentiles (usec):
00:09:56.517 | 1.00th=[ 122], 5.00th=[ 125], 10.00th=[ 126], 20.00th=[ 128],
00:09:56.517 | 30.00th=[ 129], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 135],
00:09:56.517 | 70.00th=[ 137], 80.00th=[ 139], 90.00th=[ 143], 95.00th=[ 147],
00:09:56.517 | 99.00th=[ 178], 99.50th=[ 215], 99.90th=[ 375], 99.95th=[ 375],
00:09:56.517 | 99.99th=[ 375]
00:09:56.517 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1
00:09:56.517 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:09:56.517 lat (usec) : 250=95.51%, 500=0.19%
00:09:56.517 lat (msec) : 50=4.30%
00:09:56.517 cpu : usr=0.10%, sys=0.69%, ctx=535, majf=0, minf=1
00:09:56.517 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:56.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:56.517 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:56.517 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:56.517 latency : target=0, window=0, percentile=100.00%, depth=1
00:09:56.517
00:09:56.517 Run status group 0 (all jobs):
00:09:56.517 READ: bw=90.4KiB/s (92.5kB/s), 90.4KiB/s-90.4KiB/s (92.5kB/s-92.5kB/s), io=92.0KiB (94.2kB), run=1018-1018msec
00:09:56.517 WRITE: bw=2012KiB/s (2060kB/s), 2012KiB/s-2012KiB/s (2060kB/s-2060kB/s), io=2048KiB (2097kB), run=1018-1018msec
00:09:56.517
00:09:56.517 Disk stats (read/write):
00:09:56.517 nvme0n1: ios=70/512, merge=0/0, ticks=845/68, in_queue=913, util=91.18%
00:09:56.517 10:37:03 nvmf_tcp.nvmf_target_core.nvmf_nmic --
target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:56.776 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:56.776 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:56.776 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:56.776 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:56.776 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:56.776 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:56.776 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:56.776 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:56.776 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:56.776 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:56.776 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:56.776 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:56.776 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:56.776 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:56.776 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:56.776 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:56.776 rmmod nvme_tcp 00:09:56.776 rmmod nvme_fabrics 00:09:56.776 rmmod nvme_keyring 00:09:56.776 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:56.776 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:56.776 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:56.776 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1573989 ']' 00:09:56.776 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1573989 00:09:56.776 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1573989 ']' 00:09:56.776 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1573989 00:09:56.776 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:56.776 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:56.776 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1573989 00:09:56.776 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:56.776 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:56.776 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1573989' 00:09:56.776 killing process with pid 1573989 00:09:56.776 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1573989 00:09:56.776 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@978 -- # wait 1573989 00:09:57.035 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:57.035 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:57.035 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:57.035 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:57.035 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:57.035 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:57.035 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:57.035 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:57.035 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:57.035 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.035 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.035 10:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.571 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:59.571 00:09:59.571 real 0m14.853s 00:09:59.571 user 0m32.276s 00:09:59.571 sys 0m5.271s 00:09:59.571 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.571 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.571 ************************************ 00:09:59.571 END TEST nvmf_nmic 00:09:59.571 ************************************ 00:09:59.571 10:37:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:59.571 10:37:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:59.571 10:37:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.571 10:37:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:59.571 ************************************ 00:09:59.571 START TEST nvmf_fio_target 00:09:59.571 ************************************ 00:09:59.571 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:59.571 * Looking for test storage... 
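Before the log moves on to nvmf_fio_target, the nmic test that just passed is worth condensing: a bdev can back a namespace in only one subsystem at a time, because the target takes an exclusive_write claim on it, so the second nvmf_subsystem_add_ns is expected to fail with JSON-RPC error -32602. Stripped of the rpc_cmd plumbing, the sequence is roughly the following sketch ($rpc stands for the scripts/rpc.py invocation used elsewhere in this log):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    # Expected to fail: Malloc0 is already claimed by cnode1.
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
        || echo ' Adding namespace failed - expected result.'

The multipath half of the test then added a second listener on port 4421 to cnode1, connected the kernel initiator through both ports, and ran the single-job fio write with crc32c verification shown above.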
00:09:59.571 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:59.571 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:59.571 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:59.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.572 --rc genhtml_branch_coverage=1 00:09:59.572 --rc genhtml_function_coverage=1 00:09:59.572 --rc genhtml_legend=1 00:09:59.572 --rc geninfo_all_blocks=1 00:09:59.572 --rc geninfo_unexecuted_blocks=1 00:09:59.572 00:09:59.572 ' 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:59.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.572 --rc genhtml_branch_coverage=1 00:09:59.572 --rc genhtml_function_coverage=1 00:09:59.572 --rc genhtml_legend=1 00:09:59.572 --rc geninfo_all_blocks=1 00:09:59.572 --rc geninfo_unexecuted_blocks=1 00:09:59.572 00:09:59.572 ' 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:59.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.572 --rc genhtml_branch_coverage=1 00:09:59.572 --rc genhtml_function_coverage=1 00:09:59.572 --rc genhtml_legend=1 00:09:59.572 --rc geninfo_all_blocks=1 00:09:59.572 --rc geninfo_unexecuted_blocks=1 00:09:59.572 00:09:59.572 ' 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:59.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.572 --rc genhtml_branch_coverage=1 00:09:59.572 --rc genhtml_function_coverage=1 00:09:59.572 --rc genhtml_legend=1 00:09:59.572 --rc geninfo_all_blocks=1 00:09:59.572 --rc geninfo_unexecuted_blocks=1 00:09:59.572 00:09:59.572 ' 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:59.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:59.572 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:59.573 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:59.573 10:37:06 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:59.573 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:59.573 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:59.573 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:59.573 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:59.573 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:59.573 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:59.573 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.573 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:59.573 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.573 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:59.573 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:59.573 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:59.573 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.143 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:06.143 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:06.144 10:37:12 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:06.144 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:06.144 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.144 10:37:12 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:06.144 Found net devices under 0000:86:00.0: cvl_0_0 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:06.144 Found net devices under 0000:86:00.1: cvl_0_1 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:06.144 10:37:12 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:06.144 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:06.144 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:10:06.144 00:10:06.144 --- 10.0.0.2 ping statistics --- 00:10:06.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.144 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:06.144 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:06.144 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:10:06.144 00:10:06.144 --- 10.0.0.1 ping statistics --- 00:10:06.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.144 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:10:06.144 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:06.145 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:06.145 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:06.145 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:06.145 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:06.145 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:06.145 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:06.145 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:06.145 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:06.145 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:06.145 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:06.145 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:06.145 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.145 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1578824 00:10:06.145 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1578824 00:10:06.145 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:06.145 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1578824 ']' 00:10:06.145 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.145 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:06.145 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.145 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:06.145 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.145 [2024-11-19 10:37:12.740725] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
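The NOTICE above is the fio-target instance of nvmf_tgt coming up: nvmfappstart runs the target inside the namespace and waitforlisten (the helper from autotest_common.sh) blocks until the process answers on /var/tmp/spdk.sock. A sketch of that step, with the paths and arguments taken from this run:

    # -m 0xF is a four-core mask, which is why four reactors start below;
    # -e 0xFFFF enables the full tracepoint group mask.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # polls until the RPC socket /var/tmp/spdk.sock is up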
00:10:06.145 [2024-11-19 10:37:12.740767] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.145 [2024-11-19 10:37:12.819494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:06.145 [2024-11-19 10:37:12.860915] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:06.145 [2024-11-19 10:37:12.860959] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:06.145 [2024-11-19 10:37:12.860966] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:06.145 [2024-11-19 10:37:12.860972] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:06.145 [2024-11-19 10:37:12.860977] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:06.145 [2024-11-19 10:37:12.862631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:06.145 [2024-11-19 10:37:12.862664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:06.145 [2024-11-19 10:37:12.862775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.145 [2024-11-19 10:37:12.862776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:06.145 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:06.145 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:06.145 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:06.145 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:06.145 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.145 10:37:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:06.145 10:37:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:06.145 [2024-11-19 10:37:13.172868] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:06.145 10:37:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:06.145 10:37:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:06.145 10:37:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:06.404 10:37:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:06.404 10:37:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:06.664 10:37:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:06.664 10:37:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:06.664 10:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:06.664 10:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:06.923 10:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:07.182 10:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:07.182 10:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:07.441 10:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:07.441 10:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:07.441 10:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:07.441 10:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:07.701 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:07.960 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:07.960 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:08.218 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:08.218 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:08.477 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:08.477 [2024-11-19 10:37:15.859470] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:08.477 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:08.736 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:08.995 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:10.372 10:37:17 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:10.372 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:10.372 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:10.372 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:10.372 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:10.373 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:12.285 10:37:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:12.285 10:37:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:12.285 10:37:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:12.285 10:37:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:12.285 10:37:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:12.285 10:37:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:12.285 10:37:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:12.285 [global] 00:10:12.285 thread=1 00:10:12.285 invalidate=1 00:10:12.285 rw=write 00:10:12.285 time_based=1 00:10:12.285 runtime=1 00:10:12.285 ioengine=libaio 00:10:12.285 direct=1 00:10:12.285 bs=4096 00:10:12.285 iodepth=1 00:10:12.285 norandommap=0 00:10:12.285 numjobs=1 00:10:12.285 00:10:12.285 verify_dump=1 00:10:12.285 verify_backlog=512 00:10:12.285 verify_state_save=0 00:10:12.285 do_verify=1 00:10:12.285 verify=crc32c-intel 00:10:12.285 [job0] 00:10:12.285 filename=/dev/nvme0n1 00:10:12.285 [job1] 00:10:12.285 filename=/dev/nvme0n2 00:10:12.285 [job2] 00:10:12.285 filename=/dev/nvme0n3 00:10:12.285 [job3] 00:10:12.285 filename=/dev/nvme0n4 00:10:12.285 Could not set queue depth (nvme0n1) 00:10:12.285 Could not set queue depth (nvme0n2) 00:10:12.285 Could not set queue depth (nvme0n3) 00:10:12.285 Could not set queue depth (nvme0n4) 00:10:12.542 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:12.542 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:12.543 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:12.543 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:12.543 fio-3.35 00:10:12.543 Starting 4 threads 00:10:13.914 00:10:13.914 job0: (groupid=0, jobs=1): err= 0: pid=1580173: Tue Nov 19 10:37:20 2024 00:10:13.914 read: IOPS=391, BW=1565KiB/s (1602kB/s)(1596KiB/1020msec) 00:10:13.914 slat (nsec): min=7823, max=37221, avg=10135.49, stdev=3727.96 00:10:13.914 clat (usec): min=173, max=41090, avg=2303.16, stdev=8896.17 00:10:13.914 lat (usec): min=182, max=41113, avg=2313.30, stdev=8898.59 00:10:13.914 clat percentiles (usec): 00:10:13.914 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 200], 
00:10:13.914 | 30.00th=[ 206], 40.00th=[ 212], 50.00th=[ 229], 60.00th=[ 251], 00:10:13.914 | 70.00th=[ 269], 80.00th=[ 297], 90.00th=[ 371], 95.00th=[40633], 00:10:13.914 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:13.914 | 99.99th=[41157] 00:10:13.914 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:10:13.914 slat (nsec): min=9942, max=51408, avg=11972.81, stdev=2730.79 00:10:13.914 clat (usec): min=134, max=402, avg=168.89, stdev=30.13 00:10:13.914 lat (usec): min=146, max=453, avg=180.86, stdev=31.20 00:10:13.914 clat percentiles (usec): 00:10:13.914 | 1.00th=[ 141], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 151], 00:10:13.914 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 163], 00:10:13.914 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 219], 95.00th=[ 239], 00:10:13.914 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 404], 99.95th=[ 404], 00:10:13.914 | 99.99th=[ 404] 00:10:13.914 bw ( KiB/s): min= 4096, max= 4096, per=34.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:13.914 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:13.915 lat (usec) : 250=80.68%, 500=16.25%, 750=0.55% 00:10:13.915 lat (msec) : 2=0.11%, 4=0.22%, 50=2.20% 00:10:13.915 cpu : usr=0.49%, sys=0.98%, ctx=912, majf=0, minf=2 00:10:13.915 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:13.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.915 issued rwts: total=399,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.915 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:13.915 job1: (groupid=0, jobs=1): err= 0: pid=1580174: Tue Nov 19 10:37:20 2024 00:10:13.915 read: IOPS=652, BW=2609KiB/s (2671kB/s)(2632KiB/1009msec) 00:10:13.915 slat (nsec): min=6378, max=23828, avg=7699.42, stdev=2679.37 00:10:13.915 clat (usec): min=186, max=42143, avg=1242.11, stdev=6279.88 00:10:13.915 lat (usec): min=192, max=42151, avg=1249.81, stdev=6280.78 00:10:13.915 clat percentiles (usec): 00:10:13.915 | 1.00th=[ 192], 5.00th=[ 196], 10.00th=[ 204], 20.00th=[ 212], 00:10:13.915 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 233], 60.00th=[ 241], 00:10:13.915 | 70.00th=[ 251], 80.00th=[ 262], 90.00th=[ 318], 95.00th=[ 375], 00:10:13.915 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:10:13.915 | 99.99th=[42206] 00:10:13.915 write: IOPS=1014, BW=4059KiB/s (4157kB/s)(4096KiB/1009msec); 0 zone resets 00:10:13.915 slat (nsec): min=6586, max=43980, avg=12087.38, stdev=3244.95 00:10:13.915 clat (usec): min=109, max=3454, avg=165.59, stdev=110.29 00:10:13.915 lat (usec): min=116, max=3465, avg=177.68, stdev=111.01 00:10:13.915 clat percentiles (usec): 00:10:13.915 | 1.00th=[ 114], 5.00th=[ 119], 10.00th=[ 121], 20.00th=[ 125], 00:10:13.915 | 30.00th=[ 130], 40.00th=[ 137], 50.00th=[ 147], 60.00th=[ 178], 00:10:13.915 | 70.00th=[ 188], 80.00th=[ 198], 90.00th=[ 215], 95.00th=[ 239], 00:10:13.915 | 99.00th=[ 255], 99.50th=[ 273], 99.90th=[ 351], 99.95th=[ 3458], 00:10:13.915 | 99.99th=[ 3458] 00:10:13.915 bw ( KiB/s): min= 8192, max= 8192, per=68.00%, avg=8192.00, stdev= 0.00, samples=1 00:10:13.915 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:13.915 lat (usec) : 250=86.86%, 500=12.01% 00:10:13.915 lat (msec) : 4=0.18%, 50=0.95% 00:10:13.915 cpu : usr=0.60%, sys=1.98%, ctx=1683, majf=0, minf=2 00:10:13.915 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:10:13.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.915 issued rwts: total=658,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.915 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:13.915 job2: (groupid=0, jobs=1): err= 0: pid=1580176: Tue Nov 19 10:37:20 2024 00:10:13.915 read: IOPS=22, BW=91.8KiB/s (94.0kB/s)(92.0KiB/1002msec) 00:10:13.915 slat (nsec): min=8988, max=25382, avg=21462.78, stdev=5442.08 00:10:13.915 clat (usec): min=339, max=41081, avg=39180.22, stdev=8467.79 00:10:13.915 lat (usec): min=364, max=41105, avg=39201.69, stdev=8466.97 00:10:13.915 clat percentiles (usec): 00:10:13.915 | 1.00th=[ 338], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:10:13.915 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:13.915 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:13.915 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:13.915 | 99.99th=[41157] 00:10:13.915 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:10:13.915 slat (nsec): min=10987, max=51979, avg=12465.14, stdev=2508.20 00:10:13.915 clat (usec): min=143, max=270, avg=177.46, stdev=14.78 00:10:13.915 lat (usec): min=154, max=308, avg=189.92, stdev=15.55 00:10:13.915 clat percentiles (usec): 00:10:13.915 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 165], 00:10:13.915 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 180], 00:10:13.915 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 196], 95.00th=[ 202], 00:10:13.915 | 99.00th=[ 225], 99.50th=[ 258], 99.90th=[ 269], 99.95th=[ 269], 00:10:13.915 | 99.99th=[ 269] 00:10:13.915 bw ( KiB/s): min= 4096, max= 4096, per=34.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:13.915 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:13.915 lat (usec) : 250=95.14%, 500=0.75% 00:10:13.915 lat (msec) : 50=4.11% 00:10:13.915 cpu : usr=0.30%, sys=1.00%, ctx=536, majf=0, minf=2 00:10:13.915 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:13.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.915 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.915 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:13.915 job3: (groupid=0, jobs=1): err= 0: pid=1580177: Tue Nov 19 10:37:20 2024 00:10:13.915 read: IOPS=628, BW=2515KiB/s (2575kB/s)(2540KiB/1010msec) 00:10:13.915 slat (nsec): min=6692, max=31126, avg=8150.82, stdev=2924.57 00:10:13.915 clat (usec): min=192, max=41498, avg=1287.76, stdev=6377.88 00:10:13.915 lat (usec): min=202, max=41506, avg=1295.91, stdev=6378.51 00:10:13.915 clat percentiles (usec): 00:10:13.915 | 1.00th=[ 200], 5.00th=[ 208], 10.00th=[ 215], 20.00th=[ 225], 00:10:13.915 | 30.00th=[ 233], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 251], 00:10:13.915 | 70.00th=[ 260], 80.00th=[ 277], 90.00th=[ 371], 95.00th=[ 449], 00:10:13.915 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:10:13.915 | 99.99th=[41681] 00:10:13.915 write: IOPS=1013, BW=4055KiB/s (4153kB/s)(4096KiB/1010msec); 0 zone resets 00:10:13.915 slat (nsec): min=9476, max=35788, avg=10569.47, stdev=1384.17 00:10:13.915 clat (usec): min=120, max=362, avg=167.91, stdev=38.10 00:10:13.915 lat (usec): min=130, 
max=373, avg=178.48, stdev=38.20 00:10:13.915 clat percentiles (usec): 00:10:13.915 | 1.00th=[ 124], 5.00th=[ 128], 10.00th=[ 130], 20.00th=[ 135], 00:10:13.915 | 30.00th=[ 137], 40.00th=[ 141], 50.00th=[ 157], 60.00th=[ 182], 00:10:13.915 | 70.00th=[ 192], 80.00th=[ 198], 90.00th=[ 217], 95.00th=[ 235], 00:10:13.915 | 99.00th=[ 281], 99.50th=[ 297], 99.90th=[ 359], 99.95th=[ 363], 00:10:13.915 | 99.99th=[ 363] 00:10:13.915 bw ( KiB/s): min= 8192, max= 8192, per=68.00%, avg=8192.00, stdev= 0.00, samples=1 00:10:13.915 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:13.915 lat (usec) : 250=82.16%, 500=16.34%, 750=0.54% 00:10:13.915 lat (msec) : 50=0.96% 00:10:13.915 cpu : usr=0.89%, sys=1.49%, ctx=1660, majf=0, minf=1 00:10:13.915 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:13.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.915 issued rwts: total=635,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.915 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:13.915 00:10:13.915 Run status group 0 (all jobs): 00:10:13.915 READ: bw=6725KiB/s (6887kB/s), 91.8KiB/s-2609KiB/s (94.0kB/s-2671kB/s), io=6860KiB (7025kB), run=1002-1020msec 00:10:13.915 WRITE: bw=11.8MiB/s (12.3MB/s), 2008KiB/s-4059KiB/s (2056kB/s-4157kB/s), io=12.0MiB (12.6MB), run=1002-1020msec 00:10:13.915 00:10:13.915 Disk stats (read/write): 00:10:13.915 nvme0n1: ios=418/512, merge=0/0, ticks=1579/79, in_queue=1658, util=86.17% 00:10:13.915 nvme0n2: ios=671/1024, merge=0/0, ticks=1553/163, in_queue=1716, util=90.15% 00:10:13.915 nvme0n3: ios=76/512, merge=0/0, ticks=1110/83, in_queue=1193, util=93.44% 00:10:13.915 nvme0n4: ios=684/1024, merge=0/0, ticks=726/166, in_queue=892, util=95.38% 00:10:13.915 10:37:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:13.915 [global] 00:10:13.915 thread=1 00:10:13.915 invalidate=1 00:10:13.916 rw=randwrite 00:10:13.916 time_based=1 00:10:13.916 runtime=1 00:10:13.916 ioengine=libaio 00:10:13.916 direct=1 00:10:13.916 bs=4096 00:10:13.916 iodepth=1 00:10:13.916 norandommap=0 00:10:13.916 numjobs=1 00:10:13.916 00:10:13.916 verify_dump=1 00:10:13.916 verify_backlog=512 00:10:13.916 verify_state_save=0 00:10:13.916 do_verify=1 00:10:13.916 verify=crc32c-intel 00:10:13.916 [job0] 00:10:13.916 filename=/dev/nvme0n1 00:10:13.916 [job1] 00:10:13.916 filename=/dev/nvme0n2 00:10:13.916 [job2] 00:10:13.916 filename=/dev/nvme0n3 00:10:13.916 [job3] 00:10:13.916 filename=/dev/nvme0n4 00:10:13.916 Could not set queue depth (nvme0n1) 00:10:13.916 Could not set queue depth (nvme0n2) 00:10:13.916 Could not set queue depth (nvme0n3) 00:10:13.916 Could not set queue depth (nvme0n4) 00:10:13.916 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:13.916 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:13.916 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:13.916 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:13.916 fio-3.35 00:10:13.916 Starting 4 threads 00:10:15.291 00:10:15.291 job0: (groupid=0, jobs=1): err= 0: pid=1580550: Tue Nov 
19 10:37:22 2024 00:10:15.291 read: IOPS=21, BW=87.1KiB/s (89.2kB/s)(88.0KiB/1010msec) 00:10:15.291 slat (nsec): min=9792, max=23463, avg=21646.82, stdev=3450.01 00:10:15.291 clat (usec): min=40757, max=41922, avg=41033.65, stdev=235.46 00:10:15.291 lat (usec): min=40780, max=41944, avg=41055.29, stdev=234.41 00:10:15.291 clat percentiles (usec): 00:10:15.291 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:15.291 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:15.291 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:10:15.291 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:10:15.291 | 99.99th=[41681] 00:10:15.291 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:10:15.291 slat (nsec): min=8963, max=40470, avg=9985.52, stdev=2387.21 00:10:15.291 clat (usec): min=128, max=390, avg=196.34, stdev=43.72 00:10:15.291 lat (usec): min=137, max=430, avg=206.33, stdev=44.08 00:10:15.291 clat percentiles (usec): 00:10:15.291 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 153], 00:10:15.291 | 30.00th=[ 159], 40.00th=[ 167], 50.00th=[ 178], 60.00th=[ 239], 00:10:15.291 | 70.00th=[ 241], 80.00th=[ 243], 90.00th=[ 245], 95.00th=[ 247], 00:10:15.291 | 99.00th=[ 253], 99.50th=[ 255], 99.90th=[ 392], 99.95th=[ 392], 00:10:15.291 | 99.99th=[ 392] 00:10:15.291 bw ( KiB/s): min= 4096, max= 4096, per=22.44%, avg=4096.00, stdev= 0.00, samples=1 00:10:15.291 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:15.291 lat (usec) : 250=93.82%, 500=2.06% 00:10:15.291 lat (msec) : 50=4.12% 00:10:15.291 cpu : usr=0.10%, sys=0.59%, ctx=534, majf=0, minf=1 00:10:15.291 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:15.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.291 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.291 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.291 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:15.291 job1: (groupid=0, jobs=1): err= 0: pid=1580551: Tue Nov 19 10:37:22 2024 00:10:15.291 read: IOPS=579, BW=2318KiB/s (2373kB/s)(2320KiB/1001msec) 00:10:15.291 slat (nsec): min=6581, max=25619, avg=7748.67, stdev=2543.91 00:10:15.291 clat (usec): min=184, max=42005, avg=1414.92, stdev=6892.58 00:10:15.291 lat (usec): min=191, max=42015, avg=1422.67, stdev=6894.18 00:10:15.291 clat percentiles (usec): 00:10:15.291 | 1.00th=[ 190], 5.00th=[ 194], 10.00th=[ 200], 20.00th=[ 204], 00:10:15.291 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 223], 00:10:15.291 | 70.00th=[ 227], 80.00th=[ 233], 90.00th=[ 247], 95.00th=[ 260], 00:10:15.291 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:15.291 | 99.99th=[42206] 00:10:15.291 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:10:15.291 slat (nsec): min=9226, max=43553, avg=10581.18, stdev=1760.37 00:10:15.291 clat (usec): min=120, max=379, avg=157.89, stdev=23.44 00:10:15.291 lat (usec): min=130, max=422, avg=168.47, stdev=24.01 00:10:15.291 clat percentiles (usec): 00:10:15.291 | 1.00th=[ 125], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 141], 00:10:15.292 | 30.00th=[ 145], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 159], 00:10:15.292 | 70.00th=[ 165], 80.00th=[ 174], 90.00th=[ 188], 95.00th=[ 202], 00:10:15.292 | 99.00th=[ 241], 99.50th=[ 245], 99.90th=[ 273], 99.95th=[ 379], 00:10:15.292 | 99.99th=[ 
379] 00:10:15.292 bw ( KiB/s): min= 8192, max= 8192, per=44.89%, avg=8192.00, stdev= 0.00, samples=1 00:10:15.292 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:15.292 lat (usec) : 250=96.70%, 500=2.24% 00:10:15.292 lat (msec) : 50=1.06% 00:10:15.292 cpu : usr=0.50%, sys=1.70%, ctx=1605, majf=0, minf=1 00:10:15.292 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:15.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.292 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.292 issued rwts: total=580,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.292 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:15.292 job2: (groupid=0, jobs=1): err= 0: pid=1580552: Tue Nov 19 10:37:22 2024 00:10:15.292 read: IOPS=2234, BW=8939KiB/s (9154kB/s)(8948KiB/1001msec) 00:10:15.292 slat (nsec): min=8320, max=51402, avg=9385.02, stdev=1583.05 00:10:15.292 clat (usec): min=182, max=462, avg=230.21, stdev=20.19 00:10:15.292 lat (usec): min=192, max=472, avg=239.60, stdev=20.21 00:10:15.292 clat percentiles (usec): 00:10:15.292 | 1.00th=[ 194], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 212], 00:10:15.292 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 231], 60.00th=[ 237], 00:10:15.292 | 70.00th=[ 241], 80.00th=[ 247], 90.00th=[ 255], 95.00th=[ 260], 00:10:15.292 | 99.00th=[ 269], 99.50th=[ 273], 99.90th=[ 359], 99.95th=[ 445], 00:10:15.292 | 99.99th=[ 461] 00:10:15.292 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:15.292 slat (nsec): min=11003, max=65167, avg=12839.56, stdev=2015.46 00:10:15.292 clat (usec): min=91, max=753, avg=162.26, stdev=28.58 00:10:15.292 lat (usec): min=139, max=766, avg=175.10, stdev=29.02 00:10:15.292 clat percentiles (usec): 00:10:15.292 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 147], 00:10:15.292 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 161], 00:10:15.292 | 70.00th=[ 167], 80.00th=[ 176], 90.00th=[ 190], 95.00th=[ 204], 00:10:15.292 | 99.00th=[ 262], 99.50th=[ 289], 99.90th=[ 545], 99.95th=[ 553], 00:10:15.292 | 99.99th=[ 750] 00:10:15.292 bw ( KiB/s): min= 9968, max= 9968, per=54.62%, avg=9968.00, stdev= 0.00, samples=1 00:10:15.292 iops : min= 2492, max= 2492, avg=2492.00, stdev= 0.00, samples=1 00:10:15.292 lat (usec) : 100=0.02%, 250=91.52%, 500=8.40%, 750=0.04%, 1000=0.02% 00:10:15.292 cpu : usr=4.60%, sys=8.20%, ctx=4797, majf=0, minf=1 00:10:15.292 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:15.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.292 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.292 issued rwts: total=2237,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.292 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:15.292 job3: (groupid=0, jobs=1): err= 0: pid=1580553: Tue Nov 19 10:37:22 2024 00:10:15.292 read: IOPS=21, BW=87.8KiB/s (89.9kB/s)(88.0KiB/1002msec) 00:10:15.292 slat (nsec): min=10217, max=25292, avg=23278.95, stdev=3074.82 00:10:15.292 clat (usec): min=40829, max=41081, avg=40967.85, stdev=63.98 00:10:15.292 lat (usec): min=40854, max=41104, avg=40991.13, stdev=63.25 00:10:15.292 clat percentiles (usec): 00:10:15.292 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:15.292 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:15.292 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 
00:10:15.292 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:15.292 | 99.99th=[41157] 00:10:15.292 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:10:15.292 slat (nsec): min=10701, max=73105, avg=12277.65, stdev=3141.87 00:10:15.292 clat (usec): min=147, max=443, avg=179.33, stdev=22.81 00:10:15.292 lat (usec): min=158, max=457, avg=191.60, stdev=23.64 00:10:15.292 clat percentiles (usec): 00:10:15.292 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 165], 00:10:15.292 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 180], 00:10:15.292 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 200], 95.00th=[ 223], 00:10:15.292 | 99.00th=[ 253], 99.50th=[ 260], 99.90th=[ 445], 99.95th=[ 445], 00:10:15.292 | 99.99th=[ 445] 00:10:15.292 bw ( KiB/s): min= 4096, max= 4096, per=22.44%, avg=4096.00, stdev= 0.00, samples=1 00:10:15.292 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:15.292 lat (usec) : 250=94.76%, 500=1.12% 00:10:15.292 lat (msec) : 50=4.12% 00:10:15.292 cpu : usr=0.30%, sys=1.10%, ctx=535, majf=0, minf=1 00:10:15.292 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:15.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.292 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.292 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.292 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:15.292 00:10:15.292 Run status group 0 (all jobs): 00:10:15.292 READ: bw=11.1MiB/s (11.6MB/s), 87.1KiB/s-8939KiB/s (89.2kB/s-9154kB/s), io=11.2MiB (11.7MB), run=1001-1010msec 00:10:15.292 WRITE: bw=17.8MiB/s (18.7MB/s), 2028KiB/s-9.99MiB/s (2076kB/s-10.5MB/s), io=18.0MiB (18.9MB), run=1001-1010msec 00:10:15.292 00:10:15.292 Disk stats (read/write): 00:10:15.292 nvme0n1: ios=53/512, merge=0/0, ticks=791/97, in_queue=888, util=88.28% 00:10:15.292 nvme0n2: ios=626/1024, merge=0/0, ticks=840/159, in_queue=999, util=98.58% 00:10:15.292 nvme0n3: ios=1969/2048, merge=0/0, ticks=440/318, in_queue=758, util=89.06% 00:10:15.292 nvme0n4: ios=76/512, merge=0/0, ticks=1750/85, in_queue=1835, util=98.32% 00:10:15.292 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:15.292 [global] 00:10:15.292 thread=1 00:10:15.292 invalidate=1 00:10:15.292 rw=write 00:10:15.292 time_based=1 00:10:15.292 runtime=1 00:10:15.292 ioengine=libaio 00:10:15.292 direct=1 00:10:15.292 bs=4096 00:10:15.292 iodepth=128 00:10:15.292 norandommap=0 00:10:15.292 numjobs=1 00:10:15.292 00:10:15.292 verify_dump=1 00:10:15.292 verify_backlog=512 00:10:15.292 verify_state_save=0 00:10:15.292 do_verify=1 00:10:15.292 verify=crc32c-intel 00:10:15.292 [job0] 00:10:15.292 filename=/dev/nvme0n1 00:10:15.292 [job1] 00:10:15.292 filename=/dev/nvme0n2 00:10:15.292 [job2] 00:10:15.292 filename=/dev/nvme0n3 00:10:15.292 [job3] 00:10:15.292 filename=/dev/nvme0n4 00:10:15.292 Could not set queue depth (nvme0n1) 00:10:15.292 Could not set queue depth (nvme0n2) 00:10:15.292 Could not set queue depth (nvme0n3) 00:10:15.292 Could not set queue depth (nvme0n4) 00:10:15.550 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:15.550 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:15.550 job2: (g=0): 
rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:15.550 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:15.550 fio-3.35 00:10:15.550 Starting 4 threads 00:10:16.924 00:10:16.924 job0: (groupid=0, jobs=1): err= 0: pid=1580921: Tue Nov 19 10:37:24 2024 00:10:16.924 read: IOPS=4836, BW=18.9MiB/s (19.8MB/s)(19.0MiB/1005msec) 00:10:16.924 slat (nsec): min=1087, max=13767k, avg=105859.40, stdev=685167.21 00:10:16.924 clat (usec): min=3678, max=61308, avg=13157.51, stdev=7321.18 00:10:16.924 lat (usec): min=3839, max=61314, avg=13263.37, stdev=7380.46 00:10:16.924 clat percentiles (usec): 00:10:16.924 | 1.00th=[ 6456], 5.00th=[ 7832], 10.00th=[ 8979], 20.00th=[ 9765], 00:10:16.924 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10945], 60.00th=[11469], 00:10:16.924 | 70.00th=[11863], 80.00th=[14222], 90.00th=[18482], 95.00th=[29492], 00:10:16.924 | 99.00th=[42206], 99.50th=[53740], 99.90th=[61080], 99.95th=[61080], 00:10:16.924 | 99.99th=[61080] 00:10:16.924 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:10:16.924 slat (nsec): min=1978, max=10500k, avg=89658.15, stdev=491155.81 00:10:16.924 clat (usec): min=3918, max=61305, avg=12335.13, stdev=5896.09 00:10:16.924 lat (usec): min=3928, max=61312, avg=12424.79, stdev=5923.85 00:10:16.924 clat percentiles (usec): 00:10:16.924 | 1.00th=[ 6849], 5.00th=[ 8029], 10.00th=[ 9110], 20.00th=[10028], 00:10:16.924 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10683], 60.00th=[10945], 00:10:16.924 | 70.00th=[11600], 80.00th=[12780], 90.00th=[16712], 95.00th=[17957], 00:10:16.924 | 99.00th=[44303], 99.50th=[52167], 99.90th=[53740], 99.95th=[53740], 00:10:16.924 | 99.99th=[61080] 00:10:16.924 bw ( KiB/s): min=20480, max=20480, per=30.45%, avg=20480.00, stdev= 0.00, samples=2 00:10:16.924 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:10:16.924 lat (msec) : 4=0.37%, 10=19.91%, 20=73.27%, 50=5.77%, 100=0.68% 00:10:16.924 cpu : usr=3.39%, sys=5.28%, ctx=507, majf=0, minf=2 00:10:16.924 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:16.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.924 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:16.924 issued rwts: total=4861,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.924 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:16.924 job1: (groupid=0, jobs=1): err= 0: pid=1580922: Tue Nov 19 10:37:24 2024 00:10:16.924 read: IOPS=3127, BW=12.2MiB/s (12.8MB/s)(12.3MiB/1005msec) 00:10:16.924 slat (nsec): min=1130, max=23278k, avg=151659.79, stdev=1132015.91 00:10:16.924 clat (usec): min=1669, max=89769, avg=17879.55, stdev=14144.81 00:10:16.924 lat (usec): min=8148, max=89779, avg=18031.21, stdev=14238.78 00:10:16.924 clat percentiles (usec): 00:10:16.924 | 1.00th=[ 9372], 5.00th=[10290], 10.00th=[10421], 20.00th=[11731], 00:10:16.924 | 30.00th=[12780], 40.00th=[13304], 50.00th=[13435], 60.00th=[13698], 00:10:16.924 | 70.00th=[14222], 80.00th=[16450], 90.00th=[30540], 95.00th=[58983], 00:10:16.924 | 99.00th=[79168], 99.50th=[89654], 99.90th=[89654], 99.95th=[89654], 00:10:16.924 | 99.99th=[89654] 00:10:16.924 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:10:16.924 slat (usec): min=2, max=14327, avg=141.58, stdev=830.01 00:10:16.924 clat (usec): min=4104, max=93194, avg=19546.60, stdev=12878.26 00:10:16.924 lat (usec): 
min=4112, max=93198, avg=19688.18, stdev=12934.58 00:10:16.924 clat percentiles (usec): 00:10:16.924 | 1.00th=[ 8094], 5.00th=[ 9896], 10.00th=[10814], 20.00th=[12518], 00:10:16.924 | 30.00th=[13173], 40.00th=[13698], 50.00th=[16319], 60.00th=[18220], 00:10:16.924 | 70.00th=[21103], 80.00th=[22676], 90.00th=[27395], 95.00th=[45876], 00:10:16.924 | 99.00th=[91751], 99.50th=[92799], 99.90th=[92799], 99.95th=[92799], 00:10:16.924 | 99.99th=[92799] 00:10:16.924 bw ( KiB/s): min=13696, max=14520, per=20.98%, avg=14108.00, stdev=582.66, samples=2 00:10:16.924 iops : min= 3424, max= 3630, avg=3527.00, stdev=145.66, samples=2 00:10:16.924 lat (msec) : 2=0.01%, 10=3.81%, 20=71.61%, 50=19.86%, 100=4.71% 00:10:16.924 cpu : usr=2.09%, sys=3.98%, ctx=347, majf=0, minf=1 00:10:16.924 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:16.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.924 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:16.924 issued rwts: total=3143,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.924 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:16.924 job2: (groupid=0, jobs=1): err= 0: pid=1580925: Tue Nov 19 10:37:24 2024 00:10:16.924 read: IOPS=3561, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:10:16.924 slat (nsec): min=1170, max=15353k, avg=155306.46, stdev=1082041.51 00:10:16.924 clat (usec): min=1133, max=61488, avg=19607.08, stdev=15192.95 00:10:16.924 lat (usec): min=3242, max=62189, avg=19762.39, stdev=15279.99 00:10:16.924 clat percentiles (usec): 00:10:16.924 | 1.00th=[ 3425], 5.00th=[ 6718], 10.00th=[ 8291], 20.00th=[ 9372], 00:10:16.924 | 30.00th=[10028], 40.00th=[11207], 50.00th=[11994], 60.00th=[12780], 00:10:16.924 | 70.00th=[17695], 80.00th=[36439], 90.00th=[48497], 95.00th=[52167], 00:10:16.924 | 99.00th=[55837], 99.50th=[58459], 99.90th=[61604], 99.95th=[61604], 00:10:16.924 | 99.99th=[61604] 00:10:16.924 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:10:16.924 slat (usec): min=2, max=8194, avg=118.49, stdev=537.65 00:10:16.924 clat (usec): min=2456, max=59714, avg=15832.64, stdev=9136.02 00:10:16.924 lat (usec): min=2464, max=59717, avg=15951.12, stdev=9199.28 00:10:16.924 clat percentiles (usec): 00:10:16.924 | 1.00th=[ 4948], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[ 9634], 00:10:16.924 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[12387], 60.00th=[13304], 00:10:16.924 | 70.00th=[17957], 80.00th=[22414], 90.00th=[26346], 95.00th=[34341], 00:10:16.924 | 99.00th=[49546], 99.50th=[52167], 99.90th=[59507], 99.95th=[59507], 00:10:16.924 | 99.99th=[59507] 00:10:16.924 bw ( KiB/s): min=13248, max=15424, per=21.32%, avg=14336.00, stdev=1538.66, samples=2 00:10:16.924 iops : min= 3312, max= 3856, avg=3584.00, stdev=384.67, samples=2 00:10:16.924 lat (msec) : 2=0.01%, 4=0.85%, 10=33.76%, 20=37.91%, 50=22.93% 00:10:16.924 lat (msec) : 100=4.54% 00:10:16.924 cpu : usr=2.39%, sys=3.49%, ctx=439, majf=0, minf=1 00:10:16.924 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:10:16.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.924 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:16.924 issued rwts: total=3576,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.924 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:16.924 job3: (groupid=0, jobs=1): err= 0: pid=1580926: Tue Nov 19 10:37:24 2024 00:10:16.924 read: IOPS=4139, 
BW=16.2MiB/s (17.0MB/s)(16.2MiB/1005msec) 00:10:16.924 slat (nsec): min=1553, max=12038k, avg=99767.10, stdev=566517.64 00:10:16.924 clat (usec): min=2154, max=31109, avg=12700.61, stdev=2934.65 00:10:16.924 lat (usec): min=6820, max=33898, avg=12800.38, stdev=2956.07 00:10:16.924 clat percentiles (usec): 00:10:16.924 | 1.00th=[ 7832], 5.00th=[ 8979], 10.00th=[ 9896], 20.00th=[11076], 00:10:16.924 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11863], 60.00th=[12256], 00:10:16.924 | 70.00th=[13304], 80.00th=[14484], 90.00th=[15664], 95.00th=[17433], 00:10:16.924 | 99.00th=[25035], 99.50th=[25297], 99.90th=[31065], 99.95th=[31065], 00:10:16.924 | 99.99th=[31065] 00:10:16.924 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:10:16.924 slat (usec): min=2, max=19111, avg=119.08, stdev=712.26 00:10:16.924 clat (usec): min=6821, max=49884, avg=16083.29, stdev=7546.17 00:10:16.924 lat (usec): min=6837, max=49916, avg=16202.37, stdev=7607.26 00:10:16.924 clat percentiles (usec): 00:10:16.924 | 1.00th=[ 7963], 5.00th=[10028], 10.00th=[10945], 20.00th=[11469], 00:10:16.924 | 30.00th=[11731], 40.00th=[11863], 50.00th=[12256], 60.00th=[14353], 00:10:16.924 | 70.00th=[16319], 80.00th=[20579], 90.00th=[28967], 95.00th=[34866], 00:10:16.924 | 99.00th=[40109], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:10:16.924 | 99.99th=[50070] 00:10:16.924 bw ( KiB/s): min=15872, max=20480, per=27.03%, avg=18176.00, stdev=3258.35, samples=2 00:10:16.924 iops : min= 3968, max= 5120, avg=4544.00, stdev=814.59, samples=2 00:10:16.924 lat (msec) : 4=0.01%, 10=7.66%, 20=79.94%, 50=12.39% 00:10:16.924 cpu : usr=2.99%, sys=6.67%, ctx=485, majf=0, minf=1 00:10:16.924 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:16.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.924 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:16.924 issued rwts: total=4160,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.924 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:16.924 00:10:16.924 Run status group 0 (all jobs): 00:10:16.924 READ: bw=61.2MiB/s (64.1MB/s), 12.2MiB/s-18.9MiB/s (12.8MB/s-19.8MB/s), io=61.5MiB (64.5MB), run=1004-1005msec 00:10:16.924 WRITE: bw=65.7MiB/s (68.9MB/s), 13.9MiB/s-19.9MiB/s (14.6MB/s-20.9MB/s), io=66.0MiB (69.2MB), run=1004-1005msec 00:10:16.924 00:10:16.924 Disk stats (read/write): 00:10:16.924 nvme0n1: ios=4127/4439, merge=0/0, ticks=19045/19343, in_queue=38388, util=96.29% 00:10:16.924 nvme0n2: ios=2959/3072, merge=0/0, ticks=21760/22771, in_queue=44531, util=97.23% 00:10:16.924 nvme0n3: ios=2233/2560, merge=0/0, ticks=21343/23048, in_queue=44391, util=97.73% 00:10:16.924 nvme0n4: ios=3624/4079, merge=0/0, ticks=22354/27109, in_queue=49463, util=99.78% 00:10:16.925 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:16.925 [global] 00:10:16.925 thread=1 00:10:16.925 invalidate=1 00:10:16.925 rw=randwrite 00:10:16.925 time_based=1 00:10:16.925 runtime=1 00:10:16.925 ioengine=libaio 00:10:16.925 direct=1 00:10:16.925 bs=4096 00:10:16.925 iodepth=128 00:10:16.925 norandommap=0 00:10:16.925 numjobs=1 00:10:16.925 00:10:16.925 verify_dump=1 00:10:16.925 verify_backlog=512 00:10:16.925 verify_state_save=0 00:10:16.925 do_verify=1 00:10:16.925 verify=crc32c-intel 00:10:16.925 [job0] 00:10:16.925 filename=/dev/nvme0n1 00:10:16.925 
[job1] 00:10:16.925 filename=/dev/nvme0n2 00:10:16.925 [job2] 00:10:16.925 filename=/dev/nvme0n3 00:10:16.925 [job3] 00:10:16.925 filename=/dev/nvme0n4 00:10:16.925 Could not set queue depth (nvme0n1) 00:10:16.925 Could not set queue depth (nvme0n2) 00:10:16.925 Could not set queue depth (nvme0n3) 00:10:16.925 Could not set queue depth (nvme0n4) 00:10:17.182 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:17.182 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:17.182 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:17.182 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:17.182 fio-3.35 00:10:17.182 Starting 4 threads 00:10:18.555 00:10:18.555 job0: (groupid=0, jobs=1): err= 0: pid=1581299: Tue Nov 19 10:37:25 2024 00:10:18.555 read: IOPS=4500, BW=17.6MiB/s (18.4MB/s)(18.5MiB/1052msec) 00:10:18.555 slat (nsec): min=1095, max=12634k, avg=100020.51, stdev=704086.92 00:10:18.555 clat (usec): min=3866, max=63621, avg=14537.47, stdev=8516.91 00:10:18.555 lat (usec): min=3872, max=63625, avg=14637.49, stdev=8540.44 00:10:18.555 clat percentiles (usec): 00:10:18.555 | 1.00th=[ 7832], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10945], 00:10:18.555 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12387], 60.00th=[13042], 00:10:18.555 | 70.00th=[13304], 80.00th=[14222], 90.00th=[19006], 95.00th=[28705], 00:10:18.555 | 99.00th=[59507], 99.50th=[61604], 99.90th=[63701], 99.95th=[63701], 00:10:18.555 | 99.99th=[63701] 00:10:18.555 write: IOPS=4866, BW=19.0MiB/s (19.9MB/s)(20.0MiB/1052msec); 0 zone resets 00:10:18.555 slat (nsec): min=1812, max=17659k, avg=73895.49, stdev=596860.11 00:10:18.555 clat (usec): min=1077, max=68046, avg=12645.20, stdev=6976.96 00:10:18.555 lat (usec): min=1087, max=68051, avg=12719.09, stdev=7010.73 00:10:18.555 clat percentiles (usec): 00:10:18.555 | 1.00th=[ 3621], 5.00th=[ 6849], 10.00th=[ 7701], 20.00th=[ 9110], 00:10:18.555 | 30.00th=[ 9896], 40.00th=[10421], 50.00th=[10945], 60.00th=[11731], 00:10:18.555 | 70.00th=[11994], 80.00th=[14353], 90.00th=[21103], 95.00th=[22676], 00:10:18.555 | 99.00th=[49021], 99.50th=[57934], 99.90th=[67634], 99.95th=[67634], 00:10:18.555 | 99.99th=[67634] 00:10:18.555 bw ( KiB/s): min=20288, max=20656, per=28.86%, avg=20472.00, stdev=260.22, samples=2 00:10:18.555 iops : min= 5072, max= 5164, avg=5118.00, stdev=65.05, samples=2 00:10:18.555 lat (msec) : 2=0.07%, 4=0.61%, 10=19.72%, 20=69.60%, 50=8.23% 00:10:18.555 lat (msec) : 100=1.78% 00:10:18.555 cpu : usr=3.62%, sys=5.33%, ctx=357, majf=0, minf=1 00:10:18.555 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:18.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:18.555 issued rwts: total=4734,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.555 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:18.555 job1: (groupid=0, jobs=1): err= 0: pid=1581302: Tue Nov 19 10:37:25 2024 00:10:18.555 read: IOPS=5218, BW=20.4MiB/s (21.4MB/s)(21.4MiB/1052msec) 00:10:18.555 slat (nsec): min=1403, max=17865k, avg=103135.07, stdev=783581.41 00:10:18.555 clat (usec): min=3713, max=71278, avg=13652.07, stdev=8282.06 00:10:18.555 lat (usec): min=3719, max=89143, avg=13755.21, stdev=8344.40 
00:10:18.555 clat percentiles (usec): 00:10:18.555 | 1.00th=[ 4752], 5.00th=[ 8291], 10.00th=[ 9241], 20.00th=[ 9896], 00:10:18.555 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10945], 60.00th=[12256], 00:10:18.555 | 70.00th=[13304], 80.00th=[15664], 90.00th=[18744], 95.00th=[25822], 00:10:18.555 | 99.00th=[57934], 99.50th=[57934], 99.90th=[70779], 99.95th=[70779], 00:10:18.555 | 99.99th=[70779] 00:10:18.555 write: IOPS=5353, BW=20.9MiB/s (21.9MB/s)(22.0MiB/1052msec); 0 zone resets 00:10:18.555 slat (usec): min=2, max=15196, avg=73.05, stdev=389.95 00:10:18.555 clat (usec): min=1444, max=26991, avg=10339.37, stdev=2559.97 00:10:18.555 lat (usec): min=1455, max=26994, avg=10412.42, stdev=2582.82 00:10:18.555 clat percentiles (usec): 00:10:18.555 | 1.00th=[ 2966], 5.00th=[ 5145], 10.00th=[ 6980], 20.00th=[ 9241], 00:10:18.555 | 30.00th=[10028], 40.00th=[10421], 50.00th=[10683], 60.00th=[10814], 00:10:18.555 | 70.00th=[11207], 80.00th=[11994], 90.00th=[12387], 95.00th=[12518], 00:10:18.555 | 99.00th=[19268], 99.50th=[19268], 99.90th=[21890], 99.95th=[22676], 00:10:18.555 | 99.99th=[26870] 00:10:18.555 bw ( KiB/s): min=20480, max=24576, per=31.76%, avg=22528.00, stdev=2896.31, samples=2 00:10:18.555 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:10:18.555 lat (msec) : 2=0.17%, 4=1.20%, 10=25.18%, 20=69.70%, 50=2.62% 00:10:18.555 lat (msec) : 100=1.13% 00:10:18.555 cpu : usr=3.24%, sys=5.90%, ctx=674, majf=0, minf=1 00:10:18.555 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:18.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:18.555 issued rwts: total=5490,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.555 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:18.555 job2: (groupid=0, jobs=1): err= 0: pid=1581303: Tue Nov 19 10:37:25 2024 00:10:18.555 read: IOPS=4557, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1011msec) 00:10:18.555 slat (nsec): min=1451, max=27360k, avg=117001.02, stdev=909538.98 00:10:18.555 clat (usec): min=4061, max=53143, avg=14403.12, stdev=6137.14 00:10:18.555 lat (usec): min=4067, max=53155, avg=14520.12, stdev=6194.94 00:10:18.555 clat percentiles (usec): 00:10:18.555 | 1.00th=[ 5604], 5.00th=[10421], 10.00th=[11207], 20.00th=[11469], 00:10:18.555 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12256], 60.00th=[13304], 00:10:18.555 | 70.00th=[13566], 80.00th=[16319], 90.00th=[21627], 95.00th=[23462], 00:10:18.555 | 99.00th=[44827], 99.50th=[46924], 99.90th=[48497], 99.95th=[53216], 00:10:18.555 | 99.99th=[53216] 00:10:18.555 write: IOPS=4777, BW=18.7MiB/s (19.6MB/s)(18.9MiB/1011msec); 0 zone resets 00:10:18.555 slat (usec): min=2, max=41148, avg=89.44, stdev=776.29 00:10:18.555 clat (usec): min=1012, max=55132, avg=12766.02, stdev=6651.42 00:10:18.555 lat (usec): min=1019, max=55151, avg=12855.46, stdev=6692.28 00:10:18.555 clat percentiles (usec): 00:10:18.555 | 1.00th=[ 3687], 5.00th=[ 5997], 10.00th=[ 8291], 20.00th=[10814], 00:10:18.555 | 30.00th=[11338], 40.00th=[11731], 50.00th=[11994], 60.00th=[12256], 00:10:18.555 | 70.00th=[12518], 80.00th=[13566], 90.00th=[14091], 95.00th=[19792], 00:10:18.555 | 99.00th=[50070], 99.50th=[52691], 99.90th=[54264], 99.95th=[54264], 00:10:18.555 | 99.99th=[55313] 00:10:18.555 bw ( KiB/s): min=17040, max=20584, per=26.52%, avg=18812.00, stdev=2505.99, samples=2 00:10:18.555 iops : min= 4260, max= 5146, avg=4703.00, stdev=626.50, samples=2 00:10:18.555 lat (msec) 
: 2=0.05%, 4=0.83%, 10=9.79%, 20=81.00%, 50=7.77% 00:10:18.555 lat (msec) : 100=0.56% 00:10:18.555 cpu : usr=3.37%, sys=5.45%, ctx=512, majf=0, minf=1 00:10:18.555 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:18.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:18.555 issued rwts: total=4608,4830,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.555 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:18.555 job3: (groupid=0, jobs=1): err= 0: pid=1581304: Tue Nov 19 10:37:25 2024 00:10:18.555 read: IOPS=2710, BW=10.6MiB/s (11.1MB/s)(10.7MiB/1010msec) 00:10:18.555 slat (nsec): min=1291, max=28407k, avg=217025.25, stdev=1693339.66 00:10:18.555 clat (usec): min=1426, max=96304, avg=27356.80, stdev=22446.18 00:10:18.555 lat (usec): min=1434, max=96309, avg=27573.83, stdev=22567.36 00:10:18.555 clat percentiles (usec): 00:10:18.555 | 1.00th=[ 1483], 5.00th=[10552], 10.00th=[12387], 20.00th=[13304], 00:10:18.555 | 30.00th=[14222], 40.00th=[14615], 50.00th=[15139], 60.00th=[18482], 00:10:18.555 | 70.00th=[25297], 80.00th=[47449], 90.00th=[67634], 95.00th=[72877], 00:10:18.555 | 99.00th=[95945], 99.50th=[95945], 99.90th=[95945], 99.95th=[95945], 00:10:18.556 | 99.99th=[95945] 00:10:18.556 write: IOPS=3041, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1010msec); 0 zone resets 00:10:18.556 slat (nsec): min=1841, max=22868k, avg=119932.10, stdev=727915.15 00:10:18.556 clat (usec): min=2121, max=64356, avg=17140.81, stdev=9660.44 00:10:18.556 lat (usec): min=2129, max=64366, avg=17260.75, stdev=9692.55 00:10:18.556 clat percentiles (usec): 00:10:18.556 | 1.00th=[ 4948], 5.00th=[ 9634], 10.00th=[10814], 20.00th=[11994], 00:10:18.556 | 30.00th=[12911], 40.00th=[13566], 50.00th=[13829], 60.00th=[14353], 00:10:18.556 | 70.00th=[15401], 80.00th=[22676], 90.00th=[25035], 95.00th=[31327], 00:10:18.556 | 99.00th=[61604], 99.50th=[63177], 99.90th=[64226], 99.95th=[64226], 00:10:18.556 | 99.99th=[64226] 00:10:18.556 bw ( KiB/s): min= 8192, max=16384, per=17.32%, avg=12288.00, stdev=5792.62, samples=2 00:10:18.556 iops : min= 2048, max= 4096, avg=3072.00, stdev=1448.15, samples=2 00:10:18.556 lat (msec) : 2=0.69%, 4=0.28%, 10=3.99%, 20=61.76%, 50=22.56% 00:10:18.556 lat (msec) : 100=10.72% 00:10:18.556 cpu : usr=1.49%, sys=2.78%, ctx=320, majf=0, minf=1 00:10:18.556 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:10:18.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.556 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:18.556 issued rwts: total=2738,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.556 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:18.556 00:10:18.556 Run status group 0 (all jobs): 00:10:18.556 READ: bw=65.2MiB/s (68.4MB/s), 10.6MiB/s-20.4MiB/s (11.1MB/s-21.4MB/s), io=68.6MiB (72.0MB), run=1010-1052msec 00:10:18.556 WRITE: bw=69.3MiB/s (72.6MB/s), 11.9MiB/s-20.9MiB/s (12.5MB/s-21.9MB/s), io=72.9MiB (76.4MB), run=1010-1052msec 00:10:18.556 00:10:18.556 Disk stats (read/write): 00:10:18.556 nvme0n1: ios=3991/4096, merge=0/0, ticks=44059/47303, in_queue=91362, util=96.39% 00:10:18.556 nvme0n2: ios=4646/5119, merge=0/0, ticks=53159/50478, in_queue=103637, util=99.39% 00:10:18.556 nvme0n3: ios=3728/4096, merge=0/0, ticks=53711/47044, in_queue=100755, util=99.48% 00:10:18.556 nvme0n4: ios=2067/2560, merge=0/0, ticks=20479/13070, in_queue=33549, 
util=99.26% 00:10:18.556 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:18.556 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1581534 00:10:18.556 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:18.556 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:18.556 [global] 00:10:18.556 thread=1 00:10:18.556 invalidate=1 00:10:18.556 rw=read 00:10:18.556 time_based=1 00:10:18.556 runtime=10 00:10:18.556 ioengine=libaio 00:10:18.556 direct=1 00:10:18.556 bs=4096 00:10:18.556 iodepth=1 00:10:18.556 norandommap=1 00:10:18.556 numjobs=1 00:10:18.556 00:10:18.556 [job0] 00:10:18.556 filename=/dev/nvme0n1 00:10:18.556 [job1] 00:10:18.556 filename=/dev/nvme0n2 00:10:18.556 [job2] 00:10:18.556 filename=/dev/nvme0n3 00:10:18.556 [job3] 00:10:18.556 filename=/dev/nvme0n4 00:10:18.556 Could not set queue depth (nvme0n1) 00:10:18.556 Could not set queue depth (nvme0n2) 00:10:18.556 Could not set queue depth (nvme0n3) 00:10:18.556 Could not set queue depth (nvme0n4) 00:10:18.813 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:18.813 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:18.813 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:18.813 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:18.813 fio-3.35 00:10:18.813 Starting 4 threads 00:10:21.338 10:37:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:21.596 10:37:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:21.596 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=15605760, buflen=4096 00:10:21.596 fio: pid=1581760, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:21.856 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=290816, buflen=4096 00:10:21.856 fio: pid=1581754, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:21.856 10:37:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:21.856 10:37:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:22.117 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=51216384, buflen=4096 00:10:22.117 fio: pid=1581717, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:22.117 10:37:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:22.117 10:37:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:22.374 10:37:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:10:22.374 10:37:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:22.374 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=815104, buflen=4096 00:10:22.374 fio: pid=1581734, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:22.374 00:10:22.374 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1581717: Tue Nov 19 10:37:29 2024 00:10:22.374 read: IOPS=3979, BW=15.5MiB/s (16.3MB/s)(48.8MiB/3142msec) 00:10:22.374 slat (usec): min=3, max=11625, avg= 8.41, stdev=119.02 00:10:22.374 clat (usec): min=155, max=490, avg=240.12, stdev=26.01 00:10:22.374 lat (usec): min=162, max=12009, avg=248.52, stdev=123.52 00:10:22.374 clat percentiles (usec): 00:10:22.374 | 1.00th=[ 172], 5.00th=[ 184], 10.00th=[ 192], 20.00th=[ 233], 00:10:22.374 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 251], 00:10:22.374 | 70.00th=[ 253], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 269], 00:10:22.374 | 99.00th=[ 277], 99.50th=[ 277], 99.90th=[ 383], 99.95th=[ 400], 00:10:22.374 | 99.99th=[ 478] 00:10:22.374 bw ( KiB/s): min=15520, max=17702, per=81.85%, avg=15975.67, stdev=859.78, samples=6 00:10:22.374 iops : min= 3880, max= 4425, avg=3993.83, stdev=214.74, samples=6 00:10:22.374 lat (usec) : 250=58.73%, 500=41.26% 00:10:22.374 cpu : usr=0.96%, sys=3.34%, ctx=12509, majf=0, minf=1 00:10:22.374 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:22.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.374 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.374 issued rwts: total=12505,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.374 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:22.374 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1581734: Tue Nov 19 10:37:29 2024 00:10:22.374 read: IOPS=58, BW=234KiB/s (240kB/s)(796KiB/3399msec) 00:10:22.374 slat (usec): min=6, max=11907, avg=101.69, stdev=925.07 00:10:22.374 clat (usec): min=162, max=44787, avg=16838.45, stdev=20138.07 00:10:22.374 lat (usec): min=170, max=53985, avg=16940.54, stdev=20214.99 00:10:22.374 clat percentiles (usec): 00:10:22.374 | 1.00th=[ 163], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 186], 00:10:22.374 | 30.00th=[ 196], 40.00th=[ 202], 50.00th=[ 219], 60.00th=[40633], 00:10:22.374 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:10:22.374 | 99.00th=[42206], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:10:22.374 | 99.99th=[44827] 00:10:22.374 bw ( KiB/s): min= 96, max= 560, per=1.05%, avg=204.00, stdev=176.92, samples=6 00:10:22.374 iops : min= 24, max= 140, avg=51.00, stdev=44.23, samples=6 00:10:22.374 lat (usec) : 250=57.50%, 500=1.50% 00:10:22.374 lat (msec) : 50=40.50% 00:10:22.375 cpu : usr=0.15%, sys=0.00%, ctx=206, majf=0, minf=2 00:10:22.375 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:22.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.375 complete : 0=0.5%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.375 issued rwts: total=200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.375 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:22.375 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, 
func=io_u error, error=Operation not supported): pid=1581754: Tue Nov 19 10:37:29 2024 00:10:22.375 read: IOPS=24, BW=96.7KiB/s (99.0kB/s)(284KiB/2938msec) 00:10:22.375 slat (nsec): min=11203, max=74919, avg=20620.33, stdev=8183.27 00:10:22.375 clat (usec): min=40807, max=42023, avg=41057.82, stdev=286.44 00:10:22.375 lat (usec): min=40829, max=42035, avg=41078.54, stdev=284.82 00:10:22.375 clat percentiles (usec): 00:10:22.375 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:22.375 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:22.375 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:10:22.375 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:22.375 | 99.99th=[42206] 00:10:22.375 bw ( KiB/s): min= 96, max= 96, per=0.49%, avg=96.00, stdev= 0.00, samples=5 00:10:22.375 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:10:22.375 lat (msec) : 50=98.61% 00:10:22.375 cpu : usr=0.07%, sys=0.00%, ctx=73, majf=0, minf=2 00:10:22.375 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:22.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.375 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.375 issued rwts: total=72,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.375 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:22.375 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1581760: Tue Nov 19 10:37:29 2024 00:10:22.375 read: IOPS=1383, BW=5534KiB/s (5667kB/s)(14.9MiB/2754msec) 00:10:22.375 slat (nsec): min=7027, max=40981, avg=8230.50, stdev=1686.41 00:10:22.375 clat (usec): min=186, max=42063, avg=706.88, stdev=4153.31 00:10:22.375 lat (usec): min=193, max=42081, avg=715.10, stdev=4153.73 00:10:22.375 clat percentiles (usec): 00:10:22.375 | 1.00th=[ 210], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 273], 00:10:22.375 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 285], 00:10:22.375 | 70.00th=[ 285], 80.00th=[ 289], 90.00th=[ 293], 95.00th=[ 302], 00:10:22.375 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:10:22.375 | 99.99th=[42206] 00:10:22.375 bw ( KiB/s): min= 96, max=13776, per=31.18%, avg=6086.40, stdev=6028.73, samples=5 00:10:22.375 iops : min= 24, max= 3444, avg=1521.60, stdev=1507.18, samples=5 00:10:22.375 lat (usec) : 250=3.46%, 500=95.46% 00:10:22.375 lat (msec) : 50=1.05% 00:10:22.375 cpu : usr=0.62%, sys=2.40%, ctx=3811, majf=0, minf=2 00:10:22.375 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:22.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.375 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.375 issued rwts: total=3811,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.375 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:22.375 00:10:22.375 Run status group 0 (all jobs): 00:10:22.375 READ: bw=19.1MiB/s (20.0MB/s), 96.7KiB/s-15.5MiB/s (99.0kB/s-16.3MB/s), io=64.8MiB (67.9MB), run=2754-3399msec 00:10:22.375 00:10:22.375 Disk stats (read/write): 00:10:22.375 nvme0n1: ios=12456/0, merge=0/0, ticks=3889/0, in_queue=3889, util=99.48% 00:10:22.375 nvme0n2: ios=234/0, merge=0/0, ticks=4221/0, in_queue=4221, util=99.63% 00:10:22.375 nvme0n3: ios=69/0, merge=0/0, ticks=2833/0, in_queue=2833, util=96.48% 00:10:22.375 nvme0n4: ios=3806/0, merge=0/0, ticks=2484/0, 
in_queue=2484, util=96.44% 00:10:22.375 10:37:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:22.632 10:37:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:22.632 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:22.632 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:22.889 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:22.889 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:23.146 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:23.146 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:23.403 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:23.403 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1581534 00:10:23.403 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:23.403 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:23.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.403 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:23.403 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:23.403 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:23.403 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:23.403 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:23.403 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:23.403 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:23.403 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:23.403 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:23.403 nvmf hotplug test: fio failed as expected 00:10:23.403 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:23.660 10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:23.660 10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:23.660 
10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:23.660 10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:23.660 10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:23.660 10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:23.660 10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:23.660 10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:23.660 10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:23.660 10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:23.660 10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:23.660 rmmod nvme_tcp 00:10:23.660 rmmod nvme_fabrics 00:10:23.660 rmmod nvme_keyring 00:10:23.660 10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:23.660 10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:23.660 10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:23.660 10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1578824 ']' 00:10:23.660 10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1578824 00:10:23.660 10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1578824 ']' 00:10:23.660 10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1578824 00:10:23.660 10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:23.660 10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:23.660 10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1578824 00:10:23.919 10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:23.919 10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:23.919 10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1578824' 00:10:23.919 killing process with pid 1578824 00:10:23.919 10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1578824 00:10:23.919 10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1578824 00:10:23.919 10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:23.919 10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:23.919 10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:23.919 10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:23.919 10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:23.919 10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:23.919 10:37:31 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:23.919 10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:23.919 10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:23.919 10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:23.919 10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:23.919 10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:26.453 00:10:26.453 real 0m26.849s 00:10:26.453 user 1m46.604s 00:10:26.453 sys 0m8.301s 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.453 ************************************ 00:10:26.453 END TEST nvmf_fio_target 00:10:26.453 ************************************ 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:26.453 ************************************ 00:10:26.453 START TEST nvmf_bdevio 00:10:26.453 ************************************ 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:26.453 * Looking for test storage... 
00:10:26.453 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:26.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.453 --rc genhtml_branch_coverage=1 00:10:26.453 --rc genhtml_function_coverage=1 00:10:26.453 --rc genhtml_legend=1 00:10:26.453 --rc geninfo_all_blocks=1 00:10:26.453 --rc geninfo_unexecuted_blocks=1 00:10:26.453 00:10:26.453 ' 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:26.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.453 --rc genhtml_branch_coverage=1 00:10:26.453 --rc genhtml_function_coverage=1 00:10:26.453 --rc genhtml_legend=1 00:10:26.453 --rc geninfo_all_blocks=1 00:10:26.453 --rc geninfo_unexecuted_blocks=1 00:10:26.453 00:10:26.453 ' 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:26.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.453 --rc genhtml_branch_coverage=1 00:10:26.453 --rc genhtml_function_coverage=1 00:10:26.453 --rc genhtml_legend=1 00:10:26.453 --rc geninfo_all_blocks=1 00:10:26.453 --rc geninfo_unexecuted_blocks=1 00:10:26.453 00:10:26.453 ' 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:26.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.453 --rc genhtml_branch_coverage=1 00:10:26.453 --rc genhtml_function_coverage=1 00:10:26.453 --rc genhtml_legend=1 00:10:26.453 --rc geninfo_all_blocks=1 00:10:26.453 --rc geninfo_unexecuted_blocks=1 00:10:26.453 00:10:26.453 ' 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.453 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.454 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.454 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:26.454 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.454 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:26.454 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:26.454 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:26.454 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:26.454 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:26.454 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:26.454 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:26.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:26.454 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:26.454 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:26.454 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:26.454 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:26.454 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:26.454 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:10:26.454 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:26.454 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:26.454 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:26.454 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:26.454 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:26.454 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:26.454 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:26.454 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:26.454 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:26.454 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:26.454 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:26.454 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:33.024 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:33.024 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:33.024 10:37:39 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:33.024 Found net devices under 0000:86:00.0: cvl_0_0 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:33.024 Found net devices under 0000:86:00.1: cvl_0_1 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:33.024 
10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:33.024 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:33.025 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:33.025 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:33.025 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:33.025 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:33.025 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:33.025 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:33.025 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:33.025 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:33.025 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:33.025 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:33.025 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:33.025 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:33.025 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:33.025 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:33.025 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:10:33.025 00:10:33.025 --- 10.0.0.2 ping statistics --- 00:10:33.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.025 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:10:33.025 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:33.025 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:33.025 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:10:33.025 00:10:33.025 --- 10.0.0.1 ping statistics --- 00:10:33.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.025 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:10:33.025 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:33.025 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:33.025 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:33.025 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:33.025 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:33.025 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:33.025 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:33.025 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:33.025 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:33.025 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:33.025 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:33.025 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:33.025 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:33.025 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1586140 00:10:33.025 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1586140 00:10:33.025 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:33.025 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1586140 ']' 00:10:33.025 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.025 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:33.025 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.025 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:33.025 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:33.025 [2024-11-19 10:37:39.702764] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
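With connectivity verified in both directions, the trace above shows nvmfappstart launching nvmf_tgt inside the test namespace as pid 1586140 and then waiting for it to answer on the UNIX domain socket /var/tmp/spdk.sock before provisioning begins (the EAL parameter line continues below). A minimal sketch of that launch-and-wait, using only paths and flags visible in this run; the polling loop is an illustrative stand-in for the waitforlisten helper, not the helper itself:

    # start the target in the namespace, exactly as traced above
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x78 &
    nvmfpid=$!
    # poll the default RPC socket (/var/tmp/spdk.sock) until the app is up;
    # unix sockets are addressed by filesystem path, so no netns exec is needed here
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done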
00:10:33.025 [2024-11-19 10:37:39.702807] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:33.025 [2024-11-19 10:37:39.781075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:33.025 [2024-11-19 10:37:39.823967] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:33.025 [2024-11-19 10:37:39.824000] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:33.025 [2024-11-19 10:37:39.824007] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:33.025 [2024-11-19 10:37:39.824013] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:33.025 [2024-11-19 10:37:39.824019] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:33.025 [2024-11-19 10:37:39.825482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:33.025 [2024-11-19 10:37:39.825592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:33.025 [2024-11-19 10:37:39.825693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:33.025 [2024-11-19 10:37:39.825694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:33.283 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:33.283 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:33.283 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:33.283 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:33.283 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:33.283 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:33.283 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:33.283 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.283 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:33.283 [2024-11-19 10:37:40.591798] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:33.283 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.283 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:33.283 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.284 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:33.284 Malloc0 00:10:33.284 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.284 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:33.284 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.284 10:37:40 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:33.284 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.284 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:33.284 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.284 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:33.284 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.284 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:33.284 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.284 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:33.284 [2024-11-19 10:37:40.652711] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:33.284 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.284 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:33.284 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:33.284 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:33.284 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:33.284 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:33.284 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:33.284 { 00:10:33.284 "params": { 00:10:33.284 "name": "Nvme$subsystem", 00:10:33.284 "trtype": "$TEST_TRANSPORT", 00:10:33.284 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:33.284 "adrfam": "ipv4", 00:10:33.284 "trsvcid": "$NVMF_PORT", 00:10:33.284 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:33.284 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:33.284 "hdgst": ${hdgst:-false}, 00:10:33.284 "ddgst": ${ddgst:-false} 00:10:33.284 }, 00:10:33.284 "method": "bdev_nvme_attach_controller" 00:10:33.284 } 00:10:33.284 EOF 00:10:33.284 )") 00:10:33.284 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:33.284 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:10:33.284 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:33.284 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:33.284 "params": { 00:10:33.284 "name": "Nvme1", 00:10:33.284 "trtype": "tcp", 00:10:33.284 "traddr": "10.0.0.2", 00:10:33.284 "adrfam": "ipv4", 00:10:33.284 "trsvcid": "4420", 00:10:33.284 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:33.284 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:33.284 "hdgst": false, 00:10:33.284 "ddgst": false 00:10:33.284 }, 00:10:33.284 "method": "bdev_nvme_attach_controller" 00:10:33.284 }' 00:10:33.284 [2024-11-19 10:37:40.702138] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
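At this point the target has been fully provisioned by the rpc_cmd calls traced above, and bdevio is being started against it (its EAL parameter line continues below). The same five provisioning steps as standalone rpc.py invocations, every value copied verbatim from this run:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8192-byte IO unit
    $RPC bdev_malloc_create 64 512 -b Malloc0             # 64 MiB ramdisk, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio itself takes no RPCs here: it reads the JSON emitted by gen_nvmf_target_json above on /dev/fd/62 and attaches to the listener as controller Nvme1.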
00:10:33.284 [2024-11-19 10:37:40.702182] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1586387 ] 00:10:33.543 [2024-11-19 10:37:40.777092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:33.543 [2024-11-19 10:37:40.821315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.543 [2024-11-19 10:37:40.821426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.543 [2024-11-19 10:37:40.821426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:33.543 I/O targets: 00:10:33.543 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:33.543 00:10:33.543 00:10:33.543 CUnit - A unit testing framework for C - Version 2.1-3 00:10:33.543 http://cunit.sourceforge.net/ 00:10:33.543 00:10:33.543 00:10:33.543 Suite: bdevio tests on: Nvme1n1 00:10:33.801 Test: blockdev write read block ...passed 00:10:33.801 Test: blockdev write zeroes read block ...passed 00:10:33.801 Test: blockdev write zeroes read no split ...passed 00:10:33.801 Test: blockdev write zeroes read split ...passed 00:10:33.801 Test: blockdev write zeroes read split partial ...passed 00:10:33.801 Test: blockdev reset ...[2024-11-19 10:37:41.139446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:33.801 [2024-11-19 10:37:41.139513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2366340 (9): Bad file descriptor 00:10:33.801 [2024-11-19 10:37:41.240593] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
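The reset test above disconnects the controller from the initiator side; the "Failed to flush tqpair ... (9): Bad file descriptor" notice is the expected side effect of tearing down a live TCP qpair, and the bdev_nvme message confirms the reconnect succeeded (the test's verdict follows below). Inside bdevio the reset goes through the bdev layer, but a comparable reset can be driven by hand against a running app over RPC; the controller name Nvme1 is taken from this test's JSON config:

    # hand-driven equivalent of the reset the test exercises (a sketch, not the test path)
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC bdev_nvme_reset_controller Nvme1
    $RPC bdev_nvme_get_controllers      # confirm the controller re-attached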
00:10:33.801 passed 00:10:34.059 Test: blockdev write read 8 blocks ...passed 00:10:34.059 Test: blockdev write read size > 128k ...passed 00:10:34.059 Test: blockdev write read invalid size ...passed 00:10:34.059 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:34.059 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:34.059 Test: blockdev write read max offset ...passed 00:10:34.059 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:34.059 Test: blockdev writev readv 8 blocks ...passed 00:10:34.059 Test: blockdev writev readv 30 x 1block ...passed 00:10:34.059 Test: blockdev writev readv block ...passed 00:10:34.059 Test: blockdev writev readv size > 128k ...passed 00:10:34.059 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:34.059 Test: blockdev comparev and writev ...[2024-11-19 10:37:41.454615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:34.059 [2024-11-19 10:37:41.454643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:34.059 [2024-11-19 10:37:41.454658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:34.059 [2024-11-19 10:37:41.454666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:34.059 [2024-11-19 10:37:41.454909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:34.059 [2024-11-19 10:37:41.454919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:34.059 [2024-11-19 10:37:41.454931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:34.059 [2024-11-19 10:37:41.454938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:34.059 [2024-11-19 10:37:41.455172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:34.059 [2024-11-19 10:37:41.455182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:34.059 [2024-11-19 10:37:41.455193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:34.059 [2024-11-19 10:37:41.455205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:34.059 [2024-11-19 10:37:41.455449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:34.059 [2024-11-19 10:37:41.455459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:34.059 [2024-11-19 10:37:41.455471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:34.059 [2024-11-19 10:37:41.455478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:34.059 passed 00:10:34.317 Test: blockdev nvme passthru rw ...passed 00:10:34.317 Test: blockdev nvme passthru vendor specific ...[2024-11-19 10:37:41.539281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:34.317 [2024-11-19 10:37:41.539298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:34.317 [2024-11-19 10:37:41.539400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:34.317 [2024-11-19 10:37:41.539410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:34.317 [2024-11-19 10:37:41.539510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:34.317 [2024-11-19 10:37:41.539519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:34.317 [2024-11-19 10:37:41.539620] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:34.317 [2024-11-19 10:37:41.539629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:34.317 passed 00:10:34.317 Test: blockdev nvme admin passthru ...passed 00:10:34.317 Test: blockdev copy ...passed 00:10:34.317 00:10:34.317 Run Summary: Type Total Ran Passed Failed Inactive 00:10:34.317 suites 1 1 n/a 0 0 00:10:34.317 tests 23 23 23 0 0 00:10:34.317 asserts 152 152 152 0 n/a 00:10:34.317 00:10:34.317 Elapsed time = 1.227 seconds 00:10:34.318 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:34.318 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.318 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:34.318 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.318 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:34.318 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:34.318 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:34.318 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:34.318 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:34.318 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:34.318 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:34.318 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:34.318 rmmod nvme_tcp 00:10:34.318 rmmod nvme_fabrics 00:10:34.576 rmmod nvme_keyring 00:10:34.576 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:34.576 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:34.576 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
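With the nvme kernel modules unloaded (rmmod output above), nvmftestfini proceeds below to kill the target by pid, strip the SPDK-tagged iptables rule, and tear down the namespace. Condensed into plain commands, with values taken from this run; the netns delete is an assumption for what the _remove_spdk_ns helper amounts to here:

    sync
    modprobe -v -r nvme-tcp                                # also drops nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 1586140                                           # the nvmf_tgt pid from this run
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # remove only the tagged ACCEPT rule
    ip netns delete cvl_0_0_ns_spdk                        # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1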
00:10:34.576 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1586140 ']' 00:10:34.576 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1586140 00:10:34.576 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1586140 ']' 00:10:34.576 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1586140 00:10:34.576 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:34.576 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:34.576 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1586140 00:10:34.576 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:34.576 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:34.576 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1586140' 00:10:34.576 killing process with pid 1586140 00:10:34.576 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1586140 00:10:34.576 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1586140 00:10:34.835 10:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:34.835 10:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:34.835 10:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:34.835 10:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:34.835 10:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:34.835 10:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:34.835 10:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:34.835 10:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:34.835 10:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:34.835 10:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:34.835 10:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:34.835 10:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.829 10:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:36.829 00:10:36.829 real 0m10.666s 00:10:36.829 user 0m12.756s 00:10:36.829 sys 0m5.054s 00:10:36.829 10:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:36.829 10:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:36.829 ************************************ 00:10:36.829 END TEST nvmf_bdevio 00:10:36.829 ************************************ 00:10:36.829 10:37:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:36.829 00:10:36.829 real 4m38.063s 00:10:36.829 user 10m25.567s 00:10:36.829 sys 1m38.075s 
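Everything above ran under the run_test wrapper, which prints the START/END banners and the real/user/sys timing seen here (the closing banner for nvmf_target_core follows below). Its rough shape, sketched from the output it produces rather than quoted from autotest_common.sh:

    # sketch of the banner-and-timing wrapper; the real helper also manages
    # xtrace state and propagates the wrapped command's exit code
    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }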
00:10:36.829 10:37:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:36.829 10:37:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:36.829 ************************************ 00:10:36.829 END TEST nvmf_target_core 00:10:36.829 ************************************ 00:10:36.829 10:37:44 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:36.829 10:37:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:36.829 10:37:44 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:36.829 10:37:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:37.158 ************************************ 00:10:37.158 START TEST nvmf_target_extra 00:10:37.158 ************************************ 00:10:37.158 10:37:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:37.158 * Looking for test storage... 00:10:37.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:37.158 10:37:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:37.158 10:37:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:10:37.158 10:37:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:37.158 10:37:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:37.158 10:37:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:37.158 10:37:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:37.158 10:37:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:37.158 10:37:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:37.158 10:37:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:37.158 10:37:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:37.158 10:37:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:37.158 10:37:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:37.158 10:37:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:37.158 10:37:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:37.158 10:37:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:37.158 10:37:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:37.158 10:37:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:37.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.159 --rc genhtml_branch_coverage=1 00:10:37.159 --rc genhtml_function_coverage=1 00:10:37.159 --rc genhtml_legend=1 00:10:37.159 --rc geninfo_all_blocks=1 00:10:37.159 --rc geninfo_unexecuted_blocks=1 00:10:37.159 00:10:37.159 ' 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:37.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.159 --rc genhtml_branch_coverage=1 00:10:37.159 --rc genhtml_function_coverage=1 00:10:37.159 --rc genhtml_legend=1 00:10:37.159 --rc geninfo_all_blocks=1 00:10:37.159 --rc geninfo_unexecuted_blocks=1 00:10:37.159 00:10:37.159 ' 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:37.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.159 --rc genhtml_branch_coverage=1 00:10:37.159 --rc genhtml_function_coverage=1 00:10:37.159 --rc genhtml_legend=1 00:10:37.159 --rc geninfo_all_blocks=1 00:10:37.159 --rc geninfo_unexecuted_blocks=1 00:10:37.159 00:10:37.159 ' 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:37.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.159 --rc genhtml_branch_coverage=1 00:10:37.159 --rc genhtml_function_coverage=1 00:10:37.159 --rc genhtml_legend=1 00:10:37.159 --rc geninfo_all_blocks=1 00:10:37.159 --rc geninfo_unexecuted_blocks=1 00:10:37.159 00:10:37.159 ' 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
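The long cmp_versions trace embedded above (and repeated at the top of each test file) is scripts/common.sh deciding whether the installed lcov predates 2.x, so it can select the old --rc lcov_branch_coverage=1 option names. The logic is a plain field-wise numeric compare; a condensed sketch of what the xtrace shows executing:

lt() {
    # Split both versions on '.', '-' or ':' and compare numerically field
    # by field, treating missing fields as 0; succeeds when $1 < $2.
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1
}
lt 1.15 2 && echo 'lcov 1.15 predates 2.x'   # matches the trace: 1 < 2, so true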
00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:37.159 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:37.159 ************************************ 00:10:37.159 START TEST nvmf_example 00:10:37.159 ************************************ 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:37.159 * Looking for test storage... 
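The "[: : integer expression expected" message from nvmf/common.sh line 33 is a benign shell bug the trace captures faithfully: inside build_nvmf_app_args the script runs '[' '' -eq 1 ']', and test cannot compare an empty string numerically, so the check exits with status 2 and execution simply falls through. A defensive rewrite, with $flag standing in for whatever unset variable common.sh tests there (its name is not visible in this trace):

flag=''                        # unset/empty, as in the trace
[ "$flag" -eq 1 ]              # reproduces: [: : integer expression expected
[ "${flag:-0}" -eq 1 ] || echo 'flag not set'   # empty now compares as 0, no error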
00:10:37.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:10:37.159 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:37.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.419 --rc genhtml_branch_coverage=1 00:10:37.419 --rc genhtml_function_coverage=1 00:10:37.419 --rc genhtml_legend=1 00:10:37.419 --rc geninfo_all_blocks=1 00:10:37.419 --rc geninfo_unexecuted_blocks=1 00:10:37.419 00:10:37.419 ' 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:37.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.419 --rc genhtml_branch_coverage=1 00:10:37.419 --rc genhtml_function_coverage=1 00:10:37.419 --rc genhtml_legend=1 00:10:37.419 --rc geninfo_all_blocks=1 00:10:37.419 --rc geninfo_unexecuted_blocks=1 00:10:37.419 00:10:37.419 ' 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:37.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.419 --rc genhtml_branch_coverage=1 00:10:37.419 --rc genhtml_function_coverage=1 00:10:37.419 --rc genhtml_legend=1 00:10:37.419 --rc geninfo_all_blocks=1 00:10:37.419 --rc geninfo_unexecuted_blocks=1 00:10:37.419 00:10:37.419 ' 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:37.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.419 --rc genhtml_branch_coverage=1 00:10:37.419 --rc genhtml_function_coverage=1 00:10:37.419 --rc genhtml_legend=1 00:10:37.419 --rc geninfo_all_blocks=1 00:10:37.419 --rc geninfo_unexecuted_blocks=1 00:10:37.419 00:10:37.419 ' 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:37.419 10:37:44 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:37.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:37.419 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:37.419 10:37:44 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:37.420 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:37.420 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:37.420 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:37.420 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:37.420 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:37.420 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:37.420 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:37.420 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:37.420 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:37.420 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:37.420 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:37.420 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:37.420 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:37.420 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.420 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:37.420 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.420 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:37.420 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:37.420 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:37.420 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:43.988 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:43.988 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:43.988 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:43.988 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:43.988 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:43.988 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:43.988 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:43.988 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:43.988 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:43.988 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:43.988 10:37:50 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:43.988 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:43.988 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:43.988 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:43.988 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:43.988 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:43.988 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:43.988 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:43.988 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:43.988 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:43.988 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:43.988 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:43.988 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:43.988 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:43.988 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:43.988 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:43.988 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:43.988 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:43.988 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:43.988 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:43.989 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:43.989 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:43.989 Found net devices under 0000:86:00.0: cvl_0_0 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:43.989 Found net devices under 0000:86:00.1: cvl_0_1 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:43.989 10:37:50 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:43.989 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:43.989 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.437 ms 00:10:43.989 00:10:43.989 --- 10.0.0.2 ping statistics --- 00:10:43.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.989 rtt min/avg/max/mdev = 0.437/0.437/0.437/0.000 ms 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:43.989 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:43.989 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:10:43.989 00:10:43.989 --- 10.0.0.1 ping statistics --- 00:10:43.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.989 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1590217 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1590217 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 1590217 ']' 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:43.989 10:37:50 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:43.989 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:44.248 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:44.248 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:44.248 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:44.248 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:44.249 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:44.249 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:44.249 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.249 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:44.249 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.249 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:44.249 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.249 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:44.249 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.249 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:44.249 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:44.249 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.249 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:44.249 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.249 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:44.249 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:44.249 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.249 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:44.249 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.249 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:44.249 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:44.249 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:44.249 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.249 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:44.249 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:56.457 Initializing NVMe Controllers 00:10:56.457 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:56.457 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:56.457 Initialization complete. Launching workers. 00:10:56.457 ======================================================== 00:10:56.457 Latency(us) 00:10:56.457 Device Information : IOPS MiB/s Average min max 00:10:56.457 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17900.45 69.92 3574.88 519.47 15672.11 00:10:56.457 ======================================================== 00:10:56.457 Total : 17900.45 69.92 3574.88 519.47 15672.11 00:10:56.457 00:10:56.457 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:56.457 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:56.457 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:56.457 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:56.457 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:56.457 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:56.457 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:56.457 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:56.457 rmmod nvme_tcp 00:10:56.457 rmmod nvme_fabrics 00:10:56.457 rmmod nvme_keyring 00:10:56.457 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:56.457 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:56.457 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:56.457 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 1590217 ']' 00:10:56.458 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 1590217 00:10:56.458 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 1590217 ']' 00:10:56.458 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 1590217 00:10:56.458 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:10:56.458 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:56.458 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1590217 00:10:56.458 10:38:01 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:10:56.458 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:10:56.458 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1590217' 00:10:56.458 killing process with pid 1590217 00:10:56.458 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 1590217 00:10:56.458 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 1590217 00:10:56.458 nvmf threads initialize successfully 00:10:56.458 bdev subsystem init successfully 00:10:56.458 created a nvmf target service 00:10:56.458 create targets's poll groups done 00:10:56.458 all subsystems of target started 00:10:56.458 nvmf target is running 00:10:56.458 all subsystems of target stopped 00:10:56.458 destroy targets's poll groups done 00:10:56.458 destroyed the nvmf target service 00:10:56.458 bdev subsystem finish successfully 00:10:56.458 nvmf threads destroy successfully 00:10:56.458 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:56.458 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:56.458 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:56.458 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:56.458 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:56.458 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:56.458 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:56.458 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:56.458 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:56.458 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.458 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:56.458 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.025 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:57.026 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:57.026 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:57.026 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:57.026 00:10:57.026 real 0m19.808s 00:10:57.026 user 0m46.094s 00:10:57.026 sys 0m6.047s 00:10:57.026 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:57.026 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:57.026 ************************************ 00:10:57.026 END TEST nvmf_example 00:10:57.026 ************************************ 00:10:57.026 10:38:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:57.026 10:38:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:57.026 10:38:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:57.026 10:38:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:57.026 ************************************ 00:10:57.026 START TEST nvmf_filesystem 00:10:57.026 ************************************ 00:10:57.026 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:57.026 * Looking for test storage... 00:10:57.026 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:57.026 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:57.026 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:10:57.026 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:57.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.288 --rc genhtml_branch_coverage=1 00:10:57.288 --rc genhtml_function_coverage=1 00:10:57.288 --rc genhtml_legend=1 00:10:57.288 --rc geninfo_all_blocks=1 00:10:57.288 --rc geninfo_unexecuted_blocks=1 00:10:57.288 00:10:57.288 ' 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:57.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.288 --rc genhtml_branch_coverage=1 00:10:57.288 --rc genhtml_function_coverage=1 00:10:57.288 --rc genhtml_legend=1 00:10:57.288 --rc geninfo_all_blocks=1 00:10:57.288 --rc geninfo_unexecuted_blocks=1 00:10:57.288 00:10:57.288 ' 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:57.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.288 --rc genhtml_branch_coverage=1 00:10:57.288 --rc genhtml_function_coverage=1 00:10:57.288 --rc genhtml_legend=1 00:10:57.288 --rc geninfo_all_blocks=1 00:10:57.288 --rc geninfo_unexecuted_blocks=1 00:10:57.288 00:10:57.288 ' 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:57.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.288 --rc genhtml_branch_coverage=1 00:10:57.288 --rc genhtml_function_coverage=1 00:10:57.288 --rc genhtml_legend=1 00:10:57.288 --rc geninfo_all_blocks=1 00:10:57.288 --rc geninfo_unexecuted_blocks=1 00:10:57.288 00:10:57.288 ' 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:57.288 10:38:04 
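The entries above exercise SPDK's version-comparison helpers (lt / cmp_versions in scripts/common.sh) to decide whether the installed lcov predates version 2. A minimal sketch of the comparison being traced, simplified to numeric dot-separated versions rather than the verbatim script (which also splits on '-' and ':'):

    # Split both versions on '.', then compare component-wise; components
    # missing from the shorter version are treated as 0 by bash arithmetic.
    cmp_versions() {
        local IFS=. op=$2 v
        local -a ver1=($1) ver2=($3)
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((ver1[v] > ver2[v])) && { [[ $op == '>' ]]; return; }
            ((ver1[v] < ver2[v])) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' ]]
    }

    # Mirrors the traced call: 1.15 < 2 holds, so the lcov coverage flags get set.
    cmp_versions 1.15 '<' 2 && echo "lcov 1.15 predates 2: enable branch/function coverage flags"
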
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:57.288 
10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:57.288 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:57.289 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:57.289 #define SPDK_CONFIG_H 00:10:57.289 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:57.289 #define SPDK_CONFIG_APPS 1 00:10:57.289 #define SPDK_CONFIG_ARCH native 00:10:57.289 #undef SPDK_CONFIG_ASAN 00:10:57.289 #undef SPDK_CONFIG_AVAHI 00:10:57.289 #undef SPDK_CONFIG_CET 00:10:57.289 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:57.289 #define SPDK_CONFIG_COVERAGE 1 00:10:57.289 #define SPDK_CONFIG_CROSS_PREFIX 00:10:57.289 #undef SPDK_CONFIG_CRYPTO 00:10:57.289 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:57.289 #undef SPDK_CONFIG_CUSTOMOCF 00:10:57.289 #undef SPDK_CONFIG_DAOS 00:10:57.289 #define SPDK_CONFIG_DAOS_DIR 00:10:57.289 #define SPDK_CONFIG_DEBUG 1 00:10:57.289 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:57.289 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:57.289 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:57.289 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:57.289 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:57.289 #undef SPDK_CONFIG_DPDK_UADK 00:10:57.289 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:57.289 #define SPDK_CONFIG_EXAMPLES 1 00:10:57.289 #undef SPDK_CONFIG_FC 00:10:57.289 #define SPDK_CONFIG_FC_PATH 00:10:57.289 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:57.289 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:57.289 #define SPDK_CONFIG_FSDEV 1 00:10:57.289 #undef SPDK_CONFIG_FUSE 00:10:57.289 #undef SPDK_CONFIG_FUZZER 00:10:57.289 #define SPDK_CONFIG_FUZZER_LIB 00:10:57.289 #undef SPDK_CONFIG_GOLANG 00:10:57.289 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:57.289 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:57.289 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:57.289 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:57.289 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:57.289 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:57.289 #undef SPDK_CONFIG_HAVE_LZ4 00:10:57.289 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:57.289 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:57.289 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:57.289 #define SPDK_CONFIG_IDXD 1 00:10:57.289 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:57.289 #undef SPDK_CONFIG_IPSEC_MB 00:10:57.289 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:57.289 #define SPDK_CONFIG_ISAL 1 00:10:57.289 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:57.289 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:57.289 #define SPDK_CONFIG_LIBDIR 00:10:57.289 #undef SPDK_CONFIG_LTO 00:10:57.289 #define SPDK_CONFIG_MAX_LCORES 128 00:10:57.289 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:57.289 #define SPDK_CONFIG_NVME_CUSE 1 00:10:57.289 #undef SPDK_CONFIG_OCF 00:10:57.289 #define SPDK_CONFIG_OCF_PATH 00:10:57.289 #define SPDK_CONFIG_OPENSSL_PATH 00:10:57.289 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:57.289 #define SPDK_CONFIG_PGO_DIR 00:10:57.289 #undef SPDK_CONFIG_PGO_USE 00:10:57.289 #define SPDK_CONFIG_PREFIX /usr/local 00:10:57.289 #undef SPDK_CONFIG_RAID5F 00:10:57.289 #undef SPDK_CONFIG_RBD 00:10:57.289 #define SPDK_CONFIG_RDMA 1 00:10:57.289 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:57.289 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:57.289 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:57.290 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:57.290 #define SPDK_CONFIG_SHARED 1 00:10:57.290 #undef SPDK_CONFIG_SMA 00:10:57.290 #define SPDK_CONFIG_TESTS 1 00:10:57.290 #undef SPDK_CONFIG_TSAN 
00:10:57.290 #define SPDK_CONFIG_UBLK 1 00:10:57.290 #define SPDK_CONFIG_UBSAN 1 00:10:57.290 #undef SPDK_CONFIG_UNIT_TESTS 00:10:57.290 #undef SPDK_CONFIG_URING 00:10:57.290 #define SPDK_CONFIG_URING_PATH 00:10:57.290 #undef SPDK_CONFIG_URING_ZNS 00:10:57.290 #undef SPDK_CONFIG_USDT 00:10:57.290 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:57.290 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:57.290 #define SPDK_CONFIG_VFIO_USER 1 00:10:57.290 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:57.290 #define SPDK_CONFIG_VHOST 1 00:10:57.290 #define SPDK_CONFIG_VIRTIO 1 00:10:57.290 #undef SPDK_CONFIG_VTUNE 00:10:57.290 #define SPDK_CONFIG_VTUNE_DIR 00:10:57.290 #define SPDK_CONFIG_WERROR 1 00:10:57.290 #define SPDK_CONFIG_WPDK_DIR 00:10:57.290 #undef SPDK_CONFIG_XNVME 00:10:57.290 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:57.290 10:38:04 
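A few entries back, applications.sh probes the generated SPDK build config by glob-matching the contents of include/spdk/config.h against "#define SPDK_CONFIG_DEBUG" (the heavily backslash-escaped pattern in the trace is just that literal string). A hedged sketch of the same probe, with an illustrative path standing in for the workspace path used in this run:

    # Read the generated config header in one expansion and test for the
    # DEBUG define; $(<file) is bash shorthand for $(cat file).
    config_h=/path/to/spdk/include/spdk/config.h   # illustrative path, not from this run
    if [[ -e $config_h && $(<"$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        echo "debug build: SPDK_AUTOTEST_DEBUG_APPS handling applies"
    fi
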
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:57.290 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:57.291 10:38:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:57.291 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 1592596 ]] 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 1592596 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 
00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.bHQSu6 00:10:57.292 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.bHQSu6/tests/target /tmp/spdk.bHQSu6 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:10:57.293 10:38:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=189205721088 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=195963961344 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6758240256 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97971949568 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981980672 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39169748992 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=39192793088 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23044096 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97981636608 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981980672 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=344064 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:57.293 10:38:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19596382208 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19596394496 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:10:57.293 * Looking for test storage... 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=189205721088 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8972832768 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:57.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:10:57.293 10:38:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:10:57.293 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:57.553 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:57.553 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:57.553 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:57.553 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:57.553 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:57.553 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:57.553 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:57.553 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:57.553 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:57.553 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:57.553 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:57.553 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:57.553 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:57.553 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:57.553 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:10:57.553 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:57.553 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:57.553 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:57.553 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:57.553 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:57.553 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:57.553 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:57.553 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:57.553 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:57.553 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:57.553 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:57.553 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:57.553 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:57.553 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:57.553 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:57.553 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:57.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.553 --rc genhtml_branch_coverage=1 00:10:57.553 --rc genhtml_function_coverage=1 00:10:57.553 --rc genhtml_legend=1 00:10:57.553 --rc geninfo_all_blocks=1 00:10:57.553 --rc geninfo_unexecuted_blocks=1 00:10:57.553 00:10:57.553 ' 00:10:57.553 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:57.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.553 --rc genhtml_branch_coverage=1 00:10:57.554 --rc genhtml_function_coverage=1 00:10:57.554 --rc genhtml_legend=1 00:10:57.554 --rc geninfo_all_blocks=1 00:10:57.554 --rc geninfo_unexecuted_blocks=1 00:10:57.554 00:10:57.554 ' 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:57.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.554 --rc genhtml_branch_coverage=1 00:10:57.554 --rc genhtml_function_coverage=1 00:10:57.554 --rc genhtml_legend=1 00:10:57.554 --rc geninfo_all_blocks=1 00:10:57.554 --rc geninfo_unexecuted_blocks=1 00:10:57.554 00:10:57.554 ' 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:57.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.554 --rc genhtml_branch_coverage=1 00:10:57.554 --rc genhtml_function_coverage=1 00:10:57.554 --rc genhtml_legend=1 00:10:57.554 --rc geninfo_all_blocks=1 00:10:57.554 --rc geninfo_unexecuted_blocks=1 00:10:57.554 00:10:57.554 ' 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
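[editor's note] The lt 1.15 2 / cmp_versions trace above decides whether the installed lcov predates 2.x and therefore needs the legacy --rc lcov_* options exported just after it. A minimal sketch of the same split-and-compare, assuming purely numeric version components:

    lt() {   # true if $1 < $2, comparing dotted components left to right
        local -a a b
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing components count as 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "use legacy lcov options"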
target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:57.554 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:57.554 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:04.124 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:04.124 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:04.124 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:04.124 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:04.124 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:04.124 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:04.124 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:04.124 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:04.124 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:04.124 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:04.124 
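[editor's note] The "[: : integer expression expected" complaint in the common.sh sourcing above is bash's '[' being handed an empty string where -eq needs a number (the '[' '' -eq 1 ']' trace). The test simply evaluates false and the script continues, so it is noise rather than a failure. The usual guard, shown here as an illustration and not as the common.sh fix, is a default expansion:

    flag=''                           # unset/empty in this run
    if [ "${flag:-0}" -eq 1 ]; then   # ":-0" keeps the operand numeric
        echo enabled
    fi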
10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:04.124 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:04.124 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:04.124 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:04.124 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:04.124 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:04.124 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:04.124 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:04.124 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:04.124 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:04.124 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:04.124 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:04.124 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:04.124 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:04.124 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:04.124 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:04.124 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:04.124 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:04.124 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:04.124 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:04.125 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:04.125 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:04.125 Found net devices under 0000:86:00.0: cvl_0_0 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:04.125 Found net devices under 
0000:86:00.1: cvl_0_1 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:04.125 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:04.125 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:11:04.125 00:11:04.125 --- 10.0.0.2 ping statistics --- 00:11:04.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.125 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:04.125 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:04.125 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:11:04.125 00:11:04.125 --- 10.0.0.1 ping statistics --- 00:11:04.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.125 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:04.125 ************************************ 00:11:04.125 START TEST nvmf_filesystem_no_in_capsule 00:11:04.125 ************************************ 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:04.125 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:04.126 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:04.126 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
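[editor's note] Condensed, the network plumbing traced above moves the target-side port (found under 0000:86:00.0 by the sysfs walk) into its own namespace so initiator and target traffic cross a real link; device names and addresses are exactly those of this run, and the iptables comment tag is dropped for brevity:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator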
00:11:04.126 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1595664 00:11:04.126 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1595664 00:11:04.126 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:04.126 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1595664 ']' 00:11:04.126 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.126 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:04.126 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.126 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:04.126 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:04.126 [2024-11-19 10:38:10.897584] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:11:04.126 [2024-11-19 10:38:10.897624] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.126 [2024-11-19 10:38:10.960105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:04.126 [2024-11-19 10:38:11.003476] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:04.126 [2024-11-19 10:38:11.003513] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:04.126 [2024-11-19 10:38:11.003520] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:04.126 [2024-11-19 10:38:11.003526] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:04.126 [2024-11-19 10:38:11.003531] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
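[editor's note] The target itself is started inside that namespace, per the nvmfappstart trace above (PID 1595664 in this run; the path is shortened to the repo root):

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # shm id 0, full tracepoint mask, 4 cores
    nvmfpid=$!
    # waitforlisten then polls until the app accepts RPCs on /var/tmp/spdk.sock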
00:11:04.126 [2024-11-19 10:38:11.007966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:04.126 [2024-11-19 10:38:11.008009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:04.126 [2024-11-19 10:38:11.008120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.126 [2024-11-19 10:38:11.008121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:04.126 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:04.126 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:04.126 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:04.126 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:04.126 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:04.126 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:04.126 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:04.126 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:04.126 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.126 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:04.126 [2024-11-19 10:38:11.144463] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:04.126 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.126 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:04.126 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.126 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:04.126 Malloc1 00:11:04.126 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.126 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:04.126 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.126 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:04.126 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.126 10:38:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:04.126 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.126 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:04.126 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.126 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:04.126 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.126 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:04.126 [2024-11-19 10:38:11.291868] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:04.126 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.126 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:04.126 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:04.126 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:04.126 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:04.126 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:04.126 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:04.126 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.126 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:04.126 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.126 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:04.126 { 00:11:04.126 "name": "Malloc1", 00:11:04.126 "aliases": [ 00:11:04.126 "b2905f34-108c-45bc-b836-462523cd2e14" 00:11:04.126 ], 00:11:04.126 "product_name": "Malloc disk", 00:11:04.126 "block_size": 512, 00:11:04.126 "num_blocks": 1048576, 00:11:04.126 "uuid": "b2905f34-108c-45bc-b836-462523cd2e14", 00:11:04.126 "assigned_rate_limits": { 00:11:04.126 "rw_ios_per_sec": 0, 00:11:04.126 "rw_mbytes_per_sec": 0, 00:11:04.126 "r_mbytes_per_sec": 0, 00:11:04.127 "w_mbytes_per_sec": 0 00:11:04.127 }, 00:11:04.127 "claimed": true, 00:11:04.127 "claim_type": "exclusive_write", 00:11:04.127 "zoned": false, 00:11:04.127 "supported_io_types": { 00:11:04.127 "read": 
true, 00:11:04.127 "write": true, 00:11:04.127 "unmap": true, 00:11:04.127 "flush": true, 00:11:04.127 "reset": true, 00:11:04.127 "nvme_admin": false, 00:11:04.127 "nvme_io": false, 00:11:04.127 "nvme_io_md": false, 00:11:04.127 "write_zeroes": true, 00:11:04.127 "zcopy": true, 00:11:04.127 "get_zone_info": false, 00:11:04.127 "zone_management": false, 00:11:04.127 "zone_append": false, 00:11:04.127 "compare": false, 00:11:04.127 "compare_and_write": false, 00:11:04.127 "abort": true, 00:11:04.127 "seek_hole": false, 00:11:04.127 "seek_data": false, 00:11:04.127 "copy": true, 00:11:04.127 "nvme_iov_md": false 00:11:04.127 }, 00:11:04.127 "memory_domains": [ 00:11:04.127 { 00:11:04.127 "dma_device_id": "system", 00:11:04.127 "dma_device_type": 1 00:11:04.127 }, 00:11:04.127 { 00:11:04.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.127 "dma_device_type": 2 00:11:04.127 } 00:11:04.127 ], 00:11:04.127 "driver_specific": {} 00:11:04.127 } 00:11:04.127 ]' 00:11:04.127 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:04.127 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:04.127 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:04.127 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:04.127 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:04.127 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:04.127 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:04.127 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:05.500 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:05.500 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:05.500 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:05.500 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:05.500 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:07.401 10:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:07.401 10:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:07.401 10:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:07.401 10:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:07.401 10:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:07.401 10:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:07.401 10:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:07.401 10:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:07.401 10:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:07.401 10:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:07.401 10:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:07.401 10:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:07.401 10:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:07.401 10:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:07.401 10:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:07.401 10:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:07.401 10:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:07.660 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:08.226 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:09.161 10:38:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:09.161 10:38:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:09.161 10:38:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:09.161 10:38:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:09.161 10:38:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:09.161 ************************************ 00:11:09.161 START TEST filesystem_ext4 00:11:09.161 ************************************ 00:11:09.161 10:38:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
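[editor's note] Put together, the rpc_cmd calls and initiator steps traced above amount to roughly the following sequence (rpc_cmd forwards to SPDK's scripts/rpc.py; paths assume the repo root, and the lsblk polling loop is collapsed to a single probe):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0   # in-capsule data size 0
    ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1          # 512 MiB, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    # once lsblk shows the serial, partition the attached namespace
    dev=$(lsblk -l -o NAME,SERIAL | awk '/SPDKISFASTANDAWESOME/ {print $1}')
    parted -s "/dev/$dev" mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe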
00:11:09.161 10:38:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:09.161 10:38:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:09.161 10:38:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:09.161 10:38:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:09.161 10:38:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:09.161 10:38:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:09.161 10:38:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:09.161 10:38:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:09.162 10:38:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:09.162 10:38:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:09.162 mke2fs 1.47.0 (5-Feb-2023) 00:11:09.162 Discarding device blocks: 0/522240 done 00:11:09.162 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:09.162 Filesystem UUID: d603c5a9-24e5-4307-a2bc-363eae3316b1 00:11:09.162 Superblock backups stored on blocks: 00:11:09.162 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:09.162 00:11:09.162 Allocating group tables: 0/64 done 00:11:09.162 Writing inode tables: 0/64 done 00:11:09.420 Creating journal (8192 blocks): done 00:11:10.354 Writing superblocks and filesystem accounting information: 0/64 done 00:11:10.354 00:11:10.355 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:10.355 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:16.915 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:16.915 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:16.915 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:16.915 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:16.915 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:16.915 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:16.915 
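[editor's note] Each filesystem pass (ext4 here, btrfs below) follows the same shape: format the exported namespace, prove a small write round-trips through the TCP transport, and confirm the target process survived. In outline:

    mkfs.ext4 -F /dev/nvme0n1p1          # mkfs.btrfs -f on the next pass
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync        # write reaches the remote malloc bdev
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 "$nvmfpid"                   # target (PID 1595664) must still be alive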
10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1595664 00:11:16.915 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:16.915 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:16.915 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:16.915 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:16.915 00:11:16.915 real 0m7.245s 00:11:16.915 user 0m0.021s 00:11:16.915 sys 0m0.078s 00:11:16.915 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.915 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:16.915 ************************************ 00:11:16.915 END TEST filesystem_ext4 00:11:16.915 ************************************ 00:11:16.915 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:16.915 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:16.915 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.915 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.915 ************************************ 00:11:16.915 START TEST filesystem_btrfs 00:11:16.915 ************************************ 00:11:16.915 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:16.915 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:16.915 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:16.915 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:16.915 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:16.915 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:16.915 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:16.915 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:16.915 10:38:23 
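After each filesystem sub-test, steps 37-43 of target/filesystem.sh verify that nothing regressed underneath: the nvmf target process must still be alive, and both the namespace and its partition must still be visible to the kernel. The checks, collected verbatim from the trace (run_test presumably treats any non-zero status here as a sub-test failure):

kill -0 "$nvmfpid"                       # target (pid 1595664 in this run) still alive
lsblk -l -o NAME | grep -q -w nvme0n1    # namespace still present
lsblk -l -o NAME | grep -q -w nvme0n1p1  # partition still present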
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:16.915 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:16.915 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:16.915 btrfs-progs v6.8.1 00:11:16.915 See https://btrfs.readthedocs.io for more information. 00:11:16.915 00:11:16.915 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:16.915 NOTE: several default settings have changed in version 5.15, please make sure 00:11:16.915 this does not affect your deployments: 00:11:16.915 - DUP for metadata (-m dup) 00:11:16.915 - enabled no-holes (-O no-holes) 00:11:16.916 - enabled free-space-tree (-R free-space-tree) 00:11:16.916 00:11:16.916 Label: (null) 00:11:16.916 UUID: b40fe87b-6f87-48d0-a0c9-e5a6e243f64b 00:11:16.916 Node size: 16384 00:11:16.916 Sector size: 4096 (CPU page size: 4096) 00:11:16.916 Filesystem size: 510.00MiB 00:11:16.916 Block group profiles: 00:11:16.916 Data: single 8.00MiB 00:11:16.916 Metadata: DUP 32.00MiB 00:11:16.916 System: DUP 8.00MiB 00:11:16.916 SSD detected: yes 00:11:16.916 Zoned device: no 00:11:16.916 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:16.916 Checksum: crc32c 00:11:16.916 Number of devices: 1 00:11:16.916 Devices: 00:11:16.916 ID SIZE PATH 00:11:16.916 1 510.00MiB /dev/nvme0n1p1 00:11:16.916 00:11:16.916 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:16.916 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:16.916 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:16.916 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:16.916 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:16.916 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:16.916 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:16.916 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:16.916 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1595664 00:11:16.916 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:16.916 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:16.916 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:16.916 
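Between mkfs and those checks sits the actual I/O smoke test, identical for ext4, btrfs and xfs: mount the fresh filesystem, write and flush a file, delete it, flush again, unmount. The traced steps 23-30, gathered into one runnable sequence:

mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
i=0                   # retry counter for the umount loop (unused in this run)
umount /mnt/device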
10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:16.916 00:11:16.916 real 0m0.491s 00:11:16.916 user 0m0.032s 00:11:16.916 sys 0m0.104s 00:11:16.916 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.916 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:16.916 ************************************ 00:11:16.916 END TEST filesystem_btrfs 00:11:16.916 ************************************ 00:11:16.916 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:16.916 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:16.916 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.916 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.916 ************************************ 00:11:16.916 START TEST filesystem_xfs 00:11:16.916 ************************************ 00:11:16.916 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:16.916 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:16.916 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:16.916 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:16.916 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:16.916 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:16.916 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:16.916 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:16.916 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:16.916 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:16.916 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:17.173 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:17.173 = sectsz=512 attr=2, projid32bit=1 00:11:17.173 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:17.173 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:17.173 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:17.173 = sunit=0 swidth=0 blks 00:11:17.173 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:17.173 log =internal log bsize=4096 blocks=16384, version=2 00:11:17.173 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:17.173 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:18.106 Discarding blocks...Done. 00:11:18.106 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:18.106 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:20.634 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:20.634 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:20.634 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:20.634 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:20.634 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:20.634 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:20.634 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1595664 00:11:20.634 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:20.634 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:20.634 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:20.634 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:20.634 00:11:20.634 real 0m3.710s 00:11:20.634 user 0m0.025s 00:11:20.634 sys 0m0.076s 00:11:20.634 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:20.634 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:20.634 ************************************ 00:11:20.634 END TEST filesystem_xfs 00:11:20.634 ************************************ 00:11:20.634 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:20.892 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:20.892 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:20.892 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.892 10:38:28 
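Once all three filesystems have passed, steps 91-94 tear the initiator side down: remove the partition while holding an exclusive lock on the device node (so no concurrent user races the partition-table rewrite), flush, and drop the fabric connection. Verbatim from the trace:

flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1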
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:20.892 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:20.892 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:20.892 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:20.892 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:20.892 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:20.892 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:20.892 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:20.892 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.892 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.892 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.892 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:20.892 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1595664 00:11:20.892 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1595664 ']' 00:11:20.892 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1595664 00:11:20.892 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:20.892 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:20.892 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1595664 00:11:20.892 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:20.892 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:20.892 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1595664' 00:11:20.892 killing process with pid 1595664 00:11:20.892 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 1595664 00:11:20.892 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
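waitforserial_disconnect is the inverse of the connect-side wait: poll lsblk until no block device carrying the target's serial remains. A sketch matching the traced checks at autotest_common.sh@1223-1235; the iteration cap and sleep interval are assumptions borrowed from the matching waitforserial loop:

waitforserial_disconnect() {
    local serial=$1 i=0
    while lsblk -o NAME,SERIAL | grep -q -w "$serial" \
          || lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
        (( ++i >= 15 )) && return 1   # assumed bound
        sleep 2
    done
    return 0
}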
common/autotest_common.sh@978 -- # wait 1595664 00:11:21.459 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:21.459 00:11:21.459 real 0m17.787s 00:11:21.459 user 1m10.057s 00:11:21.459 sys 0m1.424s 00:11:21.459 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:21.459 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:21.459 ************************************ 00:11:21.459 END TEST nvmf_filesystem_no_in_capsule 00:11:21.459 ************************************ 00:11:21.459 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:21.459 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:21.459 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.459 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:21.459 ************************************ 00:11:21.459 START TEST nvmf_filesystem_in_capsule 00:11:21.459 ************************************ 00:11:21.459 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:21.459 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:21.459 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:21.459 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:21.459 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:21.459 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:21.459 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1598877 00:11:21.459 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1598877 00:11:21.459 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:21.460 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1598877 ']' 00:11:21.460 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.460 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:21.460 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
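The second half of the suite reruns the identical ext4/btrfs/xfs matrix with in_capsule=4096. The only structural difference visible in the traces is the branch on the capsule size, which selects the sub-test name prefix; a sketch of that dispatch, inferred from the '[' 0 -eq 0 ']' and '[' 4096 -eq 0 ']' tests (the real script spells the three run_test calls out individually at steps 77-79 and 81-83 rather than looping):

in_capsule=$1   # 0 for the first pass, 4096 for this one
if [ "$in_capsule" -eq 0 ]; then
    prefix=filesystem
else
    prefix=filesystem_in_capsule
fi
for fs in ext4 btrfs xfs; do
    run_test "${prefix}_${fs}" nvmf_filesystem_create "$fs" nvme0n1
done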
00:11:21.460 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:21.460 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:21.460 [2024-11-19 10:38:28.764893] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:11:21.460 [2024-11-19 10:38:28.764937] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.460 [2024-11-19 10:38:28.841587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:21.460 [2024-11-19 10:38:28.880013] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:21.460 [2024-11-19 10:38:28.880054] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:21.460 [2024-11-19 10:38:28.880061] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:21.460 [2024-11-19 10:38:28.880068] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:21.460 [2024-11-19 10:38:28.880073] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:21.460 [2024-11-19 10:38:28.881732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.460 [2024-11-19 10:38:28.881842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:21.460 [2024-11-19 10:38:28.881928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:21.460 [2024-11-19 10:38:28.881927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.395 10:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:22.395 10:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:22.395 10:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:22.395 10:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:22.395 10:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.395 10:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:22.395 10:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:22.395 10:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:22.395 10:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.395 10:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.395 [2024-11-19 10:38:29.649467] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:22.395 10:38:29 
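What in_capsule=4096 changes on the wire: the TCP transport is created with a 4096-byte in-capsule data size, so writes of up to 4 KiB travel embedded in the NVMe/TCP command capsule itself rather than in a separate data transfer, and that is exactly the path this pass exercises through the filesystem workload. The transport call, flags copied verbatim from the trace:

rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096   # -c: in-capsule data size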
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.395 10:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:22.395 10:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.395 10:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.395 Malloc1 00:11:22.395 10:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.395 10:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:22.395 10:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.395 10:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.395 10:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.395 10:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:22.395 10:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.395 10:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.395 10:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.395 10:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:22.395 10:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.395 10:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.395 [2024-11-19 10:38:29.800573] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:22.395 10:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.395 10:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:22.395 10:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:22.395 10:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:22.395 10:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:22.395 10:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:22.396 10:38:29 
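Target-side provisioning then follows the same four RPCs as the first pass, all visible in the trace: create a 512 MiB malloc-backed bdev, create the subsystem with the well-known test serial, attach the bdev as a namespace, and open the TCP listener:

rpc_cmd bdev_malloc_create 512 512 -b Malloc1   # 512 MiB, 512-byte blocks
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420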
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:22.396 10:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.396 10:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.396 10:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.396 10:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:22.396 { 00:11:22.396 "name": "Malloc1", 00:11:22.396 "aliases": [ 00:11:22.396 "a92560c3-3687-427d-b578-779360ecb889" 00:11:22.396 ], 00:11:22.396 "product_name": "Malloc disk", 00:11:22.396 "block_size": 512, 00:11:22.396 "num_blocks": 1048576, 00:11:22.396 "uuid": "a92560c3-3687-427d-b578-779360ecb889", 00:11:22.396 "assigned_rate_limits": { 00:11:22.396 "rw_ios_per_sec": 0, 00:11:22.396 "rw_mbytes_per_sec": 0, 00:11:22.396 "r_mbytes_per_sec": 0, 00:11:22.396 "w_mbytes_per_sec": 0 00:11:22.396 }, 00:11:22.396 "claimed": true, 00:11:22.396 "claim_type": "exclusive_write", 00:11:22.396 "zoned": false, 00:11:22.396 "supported_io_types": { 00:11:22.396 "read": true, 00:11:22.396 "write": true, 00:11:22.396 "unmap": true, 00:11:22.396 "flush": true, 00:11:22.396 "reset": true, 00:11:22.396 "nvme_admin": false, 00:11:22.396 "nvme_io": false, 00:11:22.396 "nvme_io_md": false, 00:11:22.396 "write_zeroes": true, 00:11:22.396 "zcopy": true, 00:11:22.396 "get_zone_info": false, 00:11:22.396 "zone_management": false, 00:11:22.396 "zone_append": false, 00:11:22.396 "compare": false, 00:11:22.396 "compare_and_write": false, 00:11:22.396 "abort": true, 00:11:22.396 "seek_hole": false, 00:11:22.396 "seek_data": false, 00:11:22.396 "copy": true, 00:11:22.396 "nvme_iov_md": false 00:11:22.396 }, 00:11:22.396 "memory_domains": [ 00:11:22.396 { 00:11:22.396 "dma_device_id": "system", 00:11:22.396 "dma_device_type": 1 00:11:22.396 }, 00:11:22.396 { 00:11:22.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.396 "dma_device_type": 2 00:11:22.396 } 00:11:22.396 ], 00:11:22.396 "driver_specific": {} 00:11:22.396 } 00:11:22.396 ]' 00:11:22.396 10:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:22.657 10:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:22.657 10:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:22.657 10:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:22.657 10:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:22.657 10:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:22.657 10:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:22.657 10:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
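The bdev JSON above feeds get_bdev_size, which the trace shows extracting block_size and num_blocks via jq; filesystem.sh then scales the MiB figure back to bytes for the later comparison against the connected namespace. The reconstructed arithmetic (the MiB division is an assumption consistent with the traced values 512 * 1048576 -> 512 -> 536870912):

bdev_info=$(rpc_cmd bdev_get_bdevs -b Malloc1)
bs=$(jq '.[] .block_size' <<< "$bdev_info")    # 512
nb=$(jq '.[] .num_blocks' <<< "$bdev_info")    # 1048576
bdev_size=$(( bs * nb / 1024 / 1024 ))         # 512, reported in MiB
malloc_size=$(( bdev_size * 1024 * 1024 ))     # 536870912, later checked == nvme_size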
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:24.032 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:24.032 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:24.032 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:24.032 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:24.032 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:25.932 10:38:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:25.932 10:38:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:25.932 10:38:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:25.932 10:38:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:25.932 10:38:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:25.932 10:38:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:25.932 10:38:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:25.932 10:38:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:25.932 10:38:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:25.932 10:38:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:25.932 10:38:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:25.932 10:38:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:25.932 10:38:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:25.932 10:38:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:25.932 10:38:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:25.932 10:38:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:25.932 10:38:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:25.932 10:38:33 
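On the initiator, nvme connect is followed by waitforserial, whose traced loop counts the namespaces carrying the test serial until the expected number (one here) shows up. A sketch assembled from the traced checks at autotest_common.sh@1202-1212; the exact sleep placement and the failure return are assumptions:

waitforserial() {
    local serial=$1 i=0
    local nvme_device_counter=${2:-1} nvme_devices=0
    sleep 2
    while (( i++ <= 15 )); do
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
        sleep 2
    done
    return 1
}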
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:26.866 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:27.801 10:38:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:27.801 10:38:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:27.801 10:38:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:27.801 10:38:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:27.801 10:38:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.801 ************************************ 00:11:27.801 START TEST filesystem_in_capsule_ext4 00:11:27.801 ************************************ 00:11:27.801 10:38:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:27.801 10:38:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:27.801 10:38:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:27.801 10:38:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:27.801 10:38:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:27.801 10:38:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:27.801 10:38:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:27.801 10:38:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:27.801 10:38:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:27.801 10:38:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:27.801 10:38:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:27.801 mke2fs 1.47.0 (5-Feb-2023) 00:11:27.801 Discarding device blocks: 0/522240 done 00:11:28.059 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:28.059 Filesystem UUID: 83ed116a-5e06-4778-9665-2862ff036d0e 00:11:28.059 Superblock backups stored on blocks: 00:11:28.059 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:28.059 00:11:28.059 Allocating group tables: 0/64 done 00:11:28.059 Writing inode tables: 
0/64 done 00:11:28.059 Creating journal (8192 blocks): done 00:11:29.434 Writing superblocks and filesystem accounting information: 0/64 done 00:11:29.434 00:11:29.434 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:29.434 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:35.989 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:35.989 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:35.989 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:35.989 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:35.989 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:35.989 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:35.989 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1598877 00:11:35.989 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:35.989 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:35.989 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:35.989 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:35.989 00:11:35.989 real 0m7.943s 00:11:35.989 user 0m0.028s 00:11:35.989 sys 0m0.071s 00:11:35.989 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.989 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:35.989 ************************************ 00:11:35.989 END TEST filesystem_in_capsule_ext4 00:11:35.989 ************************************ 00:11:35.989 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:35.989 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:35.989 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.989 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.989 
************************************ 00:11:35.989 START TEST filesystem_in_capsule_btrfs 00:11:35.989 ************************************ 00:11:35.989 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:35.989 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:35.989 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:35.989 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:35.989 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:35.989 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:35.989 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:35.989 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:35.989 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:35.989 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:35.989 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:36.248 btrfs-progs v6.8.1 00:11:36.248 See https://btrfs.readthedocs.io for more information. 00:11:36.248 00:11:36.248 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:36.248 NOTE: several default settings have changed in version 5.15, please make sure 00:11:36.248 this does not affect your deployments: 00:11:36.248 - DUP for metadata (-m dup) 00:11:36.248 - enabled no-holes (-O no-holes) 00:11:36.248 - enabled free-space-tree (-R free-space-tree) 00:11:36.248 00:11:36.248 Label: (null) 00:11:36.248 UUID: 057f0db6-f9ff-4ed1-9bcd-8b314cdb8fe3 00:11:36.248 Node size: 16384 00:11:36.248 Sector size: 4096 (CPU page size: 4096) 00:11:36.248 Filesystem size: 510.00MiB 00:11:36.248 Block group profiles: 00:11:36.248 Data: single 8.00MiB 00:11:36.248 Metadata: DUP 32.00MiB 00:11:36.248 System: DUP 8.00MiB 00:11:36.248 SSD detected: yes 00:11:36.248 Zoned device: no 00:11:36.248 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:36.248 Checksum: crc32c 00:11:36.248 Number of devices: 1 00:11:36.248 Devices: 00:11:36.248 ID SIZE PATH 00:11:36.248 1 510.00MiB /dev/nvme0n1p1 00:11:36.248 00:11:36.248 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:36.248 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:36.506 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:36.506 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:36.506 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:36.506 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:36.506 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:36.506 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:36.506 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1598877 00:11:36.506 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:36.506 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:36.506 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:36.506 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:36.506 00:11:36.506 real 0m0.694s 00:11:36.506 user 0m0.029s 00:11:36.506 sys 0m0.114s 00:11:36.506 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:36.506 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:36.506 ************************************ 00:11:36.506 END TEST filesystem_in_capsule_btrfs 00:11:36.506 ************************************ 00:11:36.506 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:36.507 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:36.507 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:36.507 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.507 ************************************ 00:11:36.507 START TEST filesystem_in_capsule_xfs 00:11:36.507 ************************************ 00:11:36.507 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:36.507 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:36.507 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:36.507 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:36.507 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:36.507 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:36.507 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:36.507 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:36.507 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:36.507 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:36.507 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:36.765 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:36.765 = sectsz=512 attr=2, projid32bit=1 00:11:36.765 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:36.765 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:36.765 data = bsize=4096 blocks=130560, imaxpct=25 00:11:36.765 = sunit=0 swidth=0 blks 00:11:36.765 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:36.765 log =internal log bsize=4096 blocks=16384, version=2 00:11:36.765 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:36.765 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:37.697 Discarding blocks...Done. 
00:11:37.697 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:37.697 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:39.597 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:39.597 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:39.597 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:39.597 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:39.597 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:39.597 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:39.597 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1598877 00:11:39.597 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:39.597 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:39.597 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:39.597 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:39.597 00:11:39.597 real 0m3.057s 00:11:39.597 user 0m0.023s 00:11:39.597 sys 0m0.074s 00:11:39.597 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:39.597 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:39.597 ************************************ 00:11:39.597 END TEST filesystem_in_capsule_xfs 00:11:39.598 ************************************ 00:11:39.598 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:39.855 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:39.855 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:40.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.114 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:40.114 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:11:40.114 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:40.114 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:40.114 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:40.114 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:40.114 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:40.114 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:40.114 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.114 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.114 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.114 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:40.114 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1598877 00:11:40.114 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1598877 ']' 00:11:40.114 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1598877 00:11:40.114 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:40.114 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:40.114 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1598877 00:11:40.114 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:40.114 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:40.114 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1598877' 00:11:40.114 killing process with pid 1598877 00:11:40.114 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 1598877 00:11:40.114 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 1598877 00:11:40.373 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:40.373 00:11:40.373 real 0m19.074s 00:11:40.373 user 1m15.242s 00:11:40.373 sys 0m1.490s 00:11:40.373 10:38:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:40.373 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.373 ************************************ 00:11:40.373 END TEST nvmf_filesystem_in_capsule 00:11:40.373 ************************************ 00:11:40.373 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:40.373 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:40.373 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:40.373 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:40.373 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:40.373 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:40.373 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:40.632 rmmod nvme_tcp 00:11:40.632 rmmod nvme_fabrics 00:11:40.632 rmmod nvme_keyring 00:11:40.633 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:40.633 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:40.633 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:40.633 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:40.633 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:40.633 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:40.633 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:40.633 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:40.633 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:40.633 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:40.633 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:40.633 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:40.633 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:40.633 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.633 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:40.633 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.538 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:42.538 00:11:42.538 real 0m45.625s 00:11:42.538 user 2m27.357s 00:11:42.538 sys 0m7.645s 00:11:42.538 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:42.538 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:42.538 
************************************ 00:11:42.538 END TEST nvmf_filesystem 00:11:42.538 ************************************ 00:11:42.798 10:38:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:42.798 10:38:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:42.798 10:38:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.798 10:38:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:42.798 ************************************ 00:11:42.798 START TEST nvmf_target_discovery 00:11:42.798 ************************************ 00:11:42.798 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:42.798 * Looking for test storage... 00:11:42.798 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:42.798 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:42.798 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:11:42.798 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:42.798 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:42.798 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:42.798 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:42.798 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:42.798 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:42.798 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:42.798 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:42.798 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:42.798 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:42.798 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:42.798 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:42.798 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:42.798 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:42.798 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:42.798 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:42.798 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:42.798 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:42.798 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:42.798 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:42.798 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:42.798 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:42.798 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:42.798 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:42.798 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:42.798 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:42.798 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:42.798 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:42.798 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:42.798 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:42.798 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:42.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.799 --rc genhtml_branch_coverage=1 00:11:42.799 --rc genhtml_function_coverage=1 00:11:42.799 --rc genhtml_legend=1 00:11:42.799 --rc geninfo_all_blocks=1 00:11:42.799 --rc geninfo_unexecuted_blocks=1 00:11:42.799 00:11:42.799 ' 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:42.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.799 --rc genhtml_branch_coverage=1 00:11:42.799 --rc genhtml_function_coverage=1 00:11:42.799 --rc genhtml_legend=1 00:11:42.799 --rc geninfo_all_blocks=1 00:11:42.799 --rc geninfo_unexecuted_blocks=1 00:11:42.799 00:11:42.799 ' 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:42.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.799 --rc genhtml_branch_coverage=1 00:11:42.799 --rc genhtml_function_coverage=1 00:11:42.799 --rc genhtml_legend=1 00:11:42.799 --rc geninfo_all_blocks=1 00:11:42.799 --rc geninfo_unexecuted_blocks=1 00:11:42.799 00:11:42.799 ' 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:42.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.799 --rc genhtml_branch_coverage=1 00:11:42.799 --rc genhtml_function_coverage=1 00:11:42.799 --rc genhtml_legend=1 00:11:42.799 --rc geninfo_all_blocks=1 00:11:42.799 --rc geninfo_unexecuted_blocks=1 00:11:42.799 00:11:42.799 ' 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:42.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:42.799 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:43.058 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:43.058 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:43.058 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:43.058 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:43.058 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:43.058 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.058 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:43.058 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:43.058 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:43.058 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:43.058 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:43.058 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:49.728 10:38:55 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:49.728 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:49.728 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:49.728 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:49.729 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:49.729 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.729 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:49.729 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:49.729 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:49.729 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:49.729 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.729 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:49.729 Found net devices under 0000:86:00.0: cvl_0_0 00:11:49.729 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.729 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:49.729 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.729 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:49.729 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
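Note: the gather_supported_nvmf_pci_devs trace above matches NICs by PCI vendor:device ID (0x8086/0x159b is the Intel E810 pair reported here as "Found 0000:86:00.0") and then resolves each matched function to a kernel interface by globbing its net/ directory in sysfs. A minimal standalone sketch of that lookup, assuming lspci is available — an illustration of the pattern, not the SPDK helper itself:

# Map each E810 PCI function to its net interface the way the trace does:
# glob /sys/bus/pci/devices/<addr>/net/ and take the directory name.
for pci in $(lspci -Dnm -d 8086:159b | awk '{print $1}'); do
    for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$netdir" ] || continue      # no net driver bound to this function
        echo "Found net device under $pci: ${netdir##*/}"
    done
done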
00:11:49.729 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:49.729 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:49.729 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.729 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:49.729 Found net devices under 0000:86:00.1: cvl_0_1 00:11:49.729 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.729 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:49.729 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:49.729 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:49.729 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:49.729 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:49.729 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:49.729 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:49.729 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:49.729 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:49.729 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:49.729 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:49.729 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:49.729 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:49.729 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:49.729 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:49.729 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:49.729 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:49.729 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:49.729 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:49.729 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:49.729 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:49.729 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:49.729 10:38:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:49.729 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:49.729 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:49.729 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:49.729 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:49.729 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:49.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:49.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.434 ms 00:11:49.729 00:11:49.729 --- 10.0.0.2 ping statistics --- 00:11:49.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.729 rtt min/avg/max/mdev = 0.434/0.434/0.434/0.000 ms 00:11:49.729 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:49.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:49.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:11:49.729 00:11:49.729 --- 10.0.0.1 ping statistics --- 00:11:49.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.729 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:11:49.729 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:49.729 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:49.729 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:49.729 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:49.729 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:49.729 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:49.729 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:49.729 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:49.729 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:49.729 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:49.729 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:49.729 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:49.729 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.729 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=1605629 00:11:49.729 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:49.729 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 1605629 00:11:49.729 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 1605629 ']' 00:11:49.729 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.729 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:49.729 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.729 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:49.729 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.729 [2024-11-19 10:38:56.240829] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:11:49.729 [2024-11-19 10:38:56.240874] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:49.729 [2024-11-19 10:38:56.320518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:49.729 [2024-11-19 10:38:56.363240] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:49.729 [2024-11-19 10:38:56.363279] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:49.729 [2024-11-19 10:38:56.363287] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:49.729 [2024-11-19 10:38:56.363293] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:49.729 [2024-11-19 10:38:56.363297] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
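Note: the trace above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then blocks in waitforlisten until the app answers on /var/tmp/spdk.sock. A hedged sketch of that wait using SPDK's scripts/rpc.py — socket path, retry count, and polling interval here are illustrative, not the exact autotest_common.sh implementation:

# Start the target in the namespace, then poll its RPC socket until it
# responds; bail out early if the process dies during startup.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -m 0xF &
pid=$!
sock=/var/tmp/spdk.sock
for _ in $(seq 1 100); do
    kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited during startup"; exit 1; }
    if scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
        break                             # socket is up and serving RPCs
    fi
    sleep 0.5
done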
00:11:49.729 [2024-11-19 10:38:56.364869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:49.729 [2024-11-19 10:38:56.364996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:49.729 [2024-11-19 10:38:56.365038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.729 [2024-11-19 10:38:56.365039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:49.729 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:49.729 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:49.729 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:49.729 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:49.729 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.729 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:49.729 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:49.729 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.729 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.730 [2024-11-19 10:38:56.502886] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.730 Null1 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.730 10:38:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.730 [2024-11-19 10:38:56.552422] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.730 Null2 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:49.730 Null3 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.730 Null4 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.730 10:38:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.730 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:11:49.730 00:11:49.730 Discovery Log Number of Records 6, Generation counter 6 00:11:49.730 =====Discovery Log Entry 0====== 00:11:49.730 trtype: tcp 00:11:49.730 adrfam: ipv4 00:11:49.730 subtype: current discovery subsystem 00:11:49.730 treq: not required 00:11:49.730 portid: 0 00:11:49.730 trsvcid: 4420 00:11:49.731 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:49.731 traddr: 10.0.0.2 00:11:49.731 eflags: explicit discovery connections, duplicate discovery information 00:11:49.731 sectype: none 00:11:49.731 =====Discovery Log Entry 1====== 00:11:49.731 trtype: tcp 00:11:49.731 adrfam: ipv4 00:11:49.731 subtype: nvme subsystem 00:11:49.731 treq: not required 00:11:49.731 portid: 0 00:11:49.731 trsvcid: 4420 00:11:49.731 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:49.731 traddr: 10.0.0.2 00:11:49.731 eflags: none 00:11:49.731 sectype: none 00:11:49.731 =====Discovery Log Entry 2====== 00:11:49.731 trtype: tcp 00:11:49.731 adrfam: ipv4 00:11:49.731 subtype: nvme subsystem 00:11:49.731 treq: not required 00:11:49.731 portid: 0 00:11:49.731 trsvcid: 4420 00:11:49.731 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:49.731 traddr: 10.0.0.2 00:11:49.731 eflags: none 00:11:49.731 sectype: none 00:11:49.731 =====Discovery Log Entry 3====== 00:11:49.731 trtype: tcp 00:11:49.731 adrfam: ipv4 00:11:49.731 subtype: nvme subsystem 00:11:49.731 treq: not required 00:11:49.731 portid: 0 00:11:49.731 trsvcid: 4420 00:11:49.731 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:49.731 traddr: 10.0.0.2 00:11:49.731 eflags: none 00:11:49.731 sectype: none 00:11:49.731 =====Discovery Log Entry 4====== 00:11:49.731 trtype: tcp 00:11:49.731 adrfam: ipv4 00:11:49.731 subtype: nvme subsystem 
00:11:49.731 treq: not required 00:11:49.731 portid: 0 00:11:49.731 trsvcid: 4420 00:11:49.731 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:49.731 traddr: 10.0.0.2 00:11:49.731 eflags: none 00:11:49.731 sectype: none 00:11:49.731 =====Discovery Log Entry 5====== 00:11:49.731 trtype: tcp 00:11:49.731 adrfam: ipv4 00:11:49.731 subtype: discovery subsystem referral 00:11:49.731 treq: not required 00:11:49.731 portid: 0 00:11:49.731 trsvcid: 4430 00:11:49.731 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:49.731 traddr: 10.0.0.2 00:11:49.731 eflags: none 00:11:49.731 sectype: none 00:11:49.731 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:49.731 Perform nvmf subsystem discovery via RPC 00:11:49.731 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:49.731 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.731 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.731 [ 00:11:49.731 { 00:11:49.731 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:49.731 "subtype": "Discovery", 00:11:49.731 "listen_addresses": [ 00:11:49.731 { 00:11:49.731 "trtype": "TCP", 00:11:49.731 "adrfam": "IPv4", 00:11:49.731 "traddr": "10.0.0.2", 00:11:49.731 "trsvcid": "4420" 00:11:49.731 } 00:11:49.731 ], 00:11:49.731 "allow_any_host": true, 00:11:49.731 "hosts": [] 00:11:49.731 }, 00:11:49.731 { 00:11:49.731 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:49.731 "subtype": "NVMe", 00:11:49.731 "listen_addresses": [ 00:11:49.731 { 00:11:49.731 "trtype": "TCP", 00:11:49.731 "adrfam": "IPv4", 00:11:49.731 "traddr": "10.0.0.2", 00:11:49.731 "trsvcid": "4420" 00:11:49.731 } 00:11:49.731 ], 00:11:49.731 "allow_any_host": true, 00:11:49.731 "hosts": [], 00:11:49.731 "serial_number": "SPDK00000000000001", 00:11:49.731 "model_number": "SPDK bdev Controller", 00:11:49.731 "max_namespaces": 32, 00:11:49.731 "min_cntlid": 1, 00:11:49.731 "max_cntlid": 65519, 00:11:49.731 "namespaces": [ 00:11:49.731 { 00:11:49.731 "nsid": 1, 00:11:49.731 "bdev_name": "Null1", 00:11:49.731 "name": "Null1", 00:11:49.731 "nguid": "E2EACB63EB254624916CEA16E4465C82", 00:11:49.731 "uuid": "e2eacb63-eb25-4624-916c-ea16e4465c82" 00:11:49.731 } 00:11:49.731 ] 00:11:49.731 }, 00:11:49.731 { 00:11:49.731 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:49.731 "subtype": "NVMe", 00:11:49.731 "listen_addresses": [ 00:11:49.731 { 00:11:49.731 "trtype": "TCP", 00:11:49.731 "adrfam": "IPv4", 00:11:49.731 "traddr": "10.0.0.2", 00:11:49.731 "trsvcid": "4420" 00:11:49.731 } 00:11:49.731 ], 00:11:49.731 "allow_any_host": true, 00:11:49.731 "hosts": [], 00:11:49.731 "serial_number": "SPDK00000000000002", 00:11:49.731 "model_number": "SPDK bdev Controller", 00:11:49.731 "max_namespaces": 32, 00:11:49.731 "min_cntlid": 1, 00:11:49.731 "max_cntlid": 65519, 00:11:49.731 "namespaces": [ 00:11:49.731 { 00:11:49.731 "nsid": 1, 00:11:49.731 "bdev_name": "Null2", 00:11:49.731 "name": "Null2", 00:11:49.731 "nguid": "179AE7090FE74A9FAF4D95F6C8E11066", 00:11:49.731 "uuid": "179ae709-0fe7-4a9f-af4d-95f6c8e11066" 00:11:49.731 } 00:11:49.731 ] 00:11:49.731 }, 00:11:49.731 { 00:11:49.731 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:49.731 "subtype": "NVMe", 00:11:49.731 "listen_addresses": [ 00:11:49.731 { 00:11:49.731 "trtype": "TCP", 00:11:49.731 "adrfam": "IPv4", 00:11:49.731 "traddr": "10.0.0.2", 
00:11:49.731 "trsvcid": "4420" 00:11:49.731 } 00:11:49.731 ], 00:11:49.731 "allow_any_host": true, 00:11:49.731 "hosts": [], 00:11:49.731 "serial_number": "SPDK00000000000003", 00:11:49.731 "model_number": "SPDK bdev Controller", 00:11:49.731 "max_namespaces": 32, 00:11:49.731 "min_cntlid": 1, 00:11:49.731 "max_cntlid": 65519, 00:11:49.731 "namespaces": [ 00:11:49.731 { 00:11:49.731 "nsid": 1, 00:11:49.731 "bdev_name": "Null3", 00:11:49.731 "name": "Null3", 00:11:49.731 "nguid": "C2ED5DE58B55478CACCDB513D2C5B2F7", 00:11:49.731 "uuid": "c2ed5de5-8b55-478c-accd-b513d2c5b2f7" 00:11:49.731 } 00:11:49.731 ] 00:11:49.731 }, 00:11:49.731 { 00:11:49.731 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:49.731 "subtype": "NVMe", 00:11:49.731 "listen_addresses": [ 00:11:49.731 { 00:11:49.731 "trtype": "TCP", 00:11:49.731 "adrfam": "IPv4", 00:11:49.731 "traddr": "10.0.0.2", 00:11:49.731 "trsvcid": "4420" 00:11:49.731 } 00:11:49.731 ], 00:11:49.731 "allow_any_host": true, 00:11:49.731 "hosts": [], 00:11:49.731 "serial_number": "SPDK00000000000004", 00:11:49.731 "model_number": "SPDK bdev Controller", 00:11:49.731 "max_namespaces": 32, 00:11:49.731 "min_cntlid": 1, 00:11:49.731 "max_cntlid": 65519, 00:11:49.731 "namespaces": [ 00:11:49.731 { 00:11:49.731 "nsid": 1, 00:11:49.731 "bdev_name": "Null4", 00:11:49.731 "name": "Null4", 00:11:49.731 "nguid": "E28227FA88E04A439E02B51EC06F27A0", 00:11:49.731 "uuid": "e28227fa-88e0-4a43-9e02-b51ec06f27a0" 00:11:49.731 } 00:11:49.731 ] 00:11:49.731 } 00:11:49.731 ] 00:11:49.731 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.731 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:49.731 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:49.731 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:49.731 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.731 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.731 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.731 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:49.731 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.731 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.731 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.731 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:49.731 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:49.731 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.731 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.731 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.731 10:38:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:49.731 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.731 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.731 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.731 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:49.731 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:49.731 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.731 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.731 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.731 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:49.731 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.731 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.731 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.732 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:49.732 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:49.732 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.732 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.732 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.732 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:49.732 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.732 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.732 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.732 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:49.732 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.732 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.732 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.732 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:49.732 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:49.732 10:38:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.732 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.732 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.732 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:49.732 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:49.732 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:49.732 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:49.732 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:49.732 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:49.732 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:49.732 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:49.732 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:49.732 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:49.732 rmmod nvme_tcp 00:11:49.732 rmmod nvme_fabrics 00:11:49.732 rmmod nvme_keyring 00:11:49.732 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:49.732 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:49.732 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:49.732 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 1605629 ']' 00:11:49.732 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 1605629 00:11:49.732 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 1605629 ']' 00:11:49.732 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 1605629 00:11:49.732 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:11:49.732 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:49.732 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1605629 00:11:49.732 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:49.732 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:49.732 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1605629' 00:11:49.732 killing process with pid 1605629 00:11:49.732 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 1605629 00:11:49.732 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 1605629 00:11:49.991 10:38:57 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:49.991 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:49.991 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:49.991 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:49.991 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:49.991 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:49.991 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:49.991 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:49.991 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:49.991 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.991 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:49.991 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:51.894 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:51.894 00:11:51.894 real 0m9.253s 00:11:51.894 user 0m5.312s 00:11:51.894 sys 0m4.832s 00:11:51.894 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:51.894 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:51.894 ************************************ 00:11:51.894 END TEST nvmf_target_discovery 00:11:51.894 ************************************ 00:11:51.894 10:38:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:51.894 10:38:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:51.894 10:38:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:51.894 10:38:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:52.154 ************************************ 00:11:52.154 START TEST nvmf_referrals 00:11:52.154 ************************************ 00:11:52.154 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:52.154 * Looking for test storage... 
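The trace above finishes the nvmf_target_discovery teardown: each cnode subsystem and its null bdev are deleted in turn, the 4430 referral is removed, and nvmftestfini unloads nvme-tcp/nvme-fabrics/nvme-keyring before killing target pid 1605629. As a minimal sketch, the same cleanup can be driven directly with SPDK's rpc.py (the $SPDK_DIR checkout path is an assumption; the RPC names are exactly the ones traced above):

    # Hedged sketch of the teardown loop that rpc_cmd runs in the trace.
    RPC="$SPDK_DIR/scripts/rpc.py"    # hypothetical checkout path
    for i in 1 2 3 4; do
        "$RPC" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"   # drop the subsystem first
        "$RPC" bdev_null_delete "Null$i"                             # then its backing null bdev
    done
    "$RPC" nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430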
00:11:52.154 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:52.154 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:52.154 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:11:52.154 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:52.154 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:52.154 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:52.154 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:52.154 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:52.154 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:52.154 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:52.154 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:52.154 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:52.154 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:52.154 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:52.154 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:52.154 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:52.154 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:52.154 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:52.154 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:52.154 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:52.154 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:52.154 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:52.154 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:52.154 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:52.154 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:52.154 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:52.154 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:52.154 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:52.154 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:52.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.155 --rc genhtml_branch_coverage=1 00:11:52.155 --rc genhtml_function_coverage=1 00:11:52.155 --rc genhtml_legend=1 00:11:52.155 --rc geninfo_all_blocks=1 00:11:52.155 --rc geninfo_unexecuted_blocks=1 00:11:52.155 00:11:52.155 ' 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:52.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.155 --rc genhtml_branch_coverage=1 00:11:52.155 --rc genhtml_function_coverage=1 00:11:52.155 --rc genhtml_legend=1 00:11:52.155 --rc geninfo_all_blocks=1 00:11:52.155 --rc geninfo_unexecuted_blocks=1 00:11:52.155 00:11:52.155 ' 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:52.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.155 --rc genhtml_branch_coverage=1 00:11:52.155 --rc genhtml_function_coverage=1 00:11:52.155 --rc genhtml_legend=1 00:11:52.155 --rc geninfo_all_blocks=1 00:11:52.155 --rc geninfo_unexecuted_blocks=1 00:11:52.155 00:11:52.155 ' 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:52.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.155 --rc genhtml_branch_coverage=1 00:11:52.155 --rc genhtml_function_coverage=1 00:11:52.155 --rc genhtml_legend=1 00:11:52.155 --rc geninfo_all_blocks=1 00:11:52.155 --rc geninfo_unexecuted_blocks=1 00:11:52.155 00:11:52.155 ' 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:52.155 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
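The "[: : integer expression expected" complaint above is a real bash warning from nvmf/common.sh line 33: the traced test expands to '[' '' -eq 1 ']' because the variable it checks is unset in this run, and test's -eq operator requires an integer operand, so bash prints the warning and the condition simply evaluates false. A null-safe default would silence it; a minimal illustration, with FLAG standing in for whichever variable common.sh checks there:

    [ "$FLAG" -eq 1 ]        # FLAG empty -> bash warns "[: : integer expression expected"
    [ "${FLAG:-0}" -eq 1 ]   # empty FLAG defaults to 0, so the comparison stays valid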
00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:52.155 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:52.156 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.156 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:52.156 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.156 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:52.156 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:52.156 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:52.156 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:58.723 10:39:05 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:58.723 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:58.723 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:58.723 
10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:58.723 Found net devices under 0000:86:00.0: cvl_0_0 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:58.723 Found net devices under 0000:86:00.1: cvl_0_1 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:58.723 10:39:05 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:58.723 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:58.724 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:58.724 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.407 ms
00:11:58.724
00:11:58.724 --- 10.0.0.2 ping statistics ---
00:11:58.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:58.724 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms
00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:58.724 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:58.724 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms
00:11:58.724
00:11:58.724 --- 10.0.0.1 ping statistics ---
00:11:58.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:58.724 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms
00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0
00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF
00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=1609695
00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 1609695
00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 1609695 ']'
00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:58.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
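By this point the harness has split the two ice ports across a network namespace: cvl_0_0 was moved into cvl_0_0_ns_spdk as the target interface (10.0.0.2/24) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1/24), and the two pings above confirm reachability in both directions before nvmf_tgt is started inside the namespace. The setup condenses to the following (commands lifted verbatim from the trace; run as root):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator-side address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator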
00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.724 [2024-11-19 10:39:05.622446] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:11:58.724 [2024-11-19 10:39:05.622496] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:58.724 [2024-11-19 10:39:05.701368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:58.724 [2024-11-19 10:39:05.745200] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:58.724 [2024-11-19 10:39:05.745238] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:58.724 [2024-11-19 10:39:05.745245] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:58.724 [2024-11-19 10:39:05.745251] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:58.724 [2024-11-19 10:39:05.745256] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:58.724 [2024-11-19 10:39:05.746876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:58.724 [2024-11-19 10:39:05.747020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:58.724 [2024-11-19 10:39:05.747053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.724 [2024-11-19 10:39:05.747054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.724 [2024-11-19 10:39:05.884807] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:11:58.724 [2024-11-19 10:39:05.898142] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:58.724 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.725 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:58.725 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.725 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.725 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:58.725 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:58.725 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:58.725 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:58.725 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:58.725 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:58.725 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:58.725 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:58.983 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:58.983 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:58.983 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:58.983 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.983 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.983 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.983 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:58.983 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.983 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.983 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.983 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:58.983 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.983 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.983 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.983 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:58.983 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:58.983 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.983 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.983 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.983 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:58.983 10:39:06 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:58.983 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:58.983 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:58.983 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:58.983 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:58.983 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:59.241 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:59.241 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:59.241 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:59.241 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.241 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.241 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.241 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:59.241 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.241 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.241 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.241 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:59.241 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:59.241 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:59.241 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.241 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:59.241 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:59.241 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.241 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.241 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:59.241 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:59.241 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:59.241 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:11:59.241 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:59.241 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:59.241 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:59.241 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:59.499 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:59.499 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:59.499 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:59.499 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:59.499 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:59.499 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:59.499 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:59.499 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:59.499 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:59.499 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:59.499 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:59.499 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:59.499 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:59.757 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:59.757 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:59.758 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.758 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.758 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.758 10:39:07 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:59.758 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:59.758 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:59.758 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:59.758 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.758 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.758 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:59.758 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.758 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:59.758 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:59.758 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:59.758 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:59.758 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:59.758 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:59.758 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:59.758 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:00.017 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:00.017 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:00.017 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:00.017 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:00.017 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:00.017 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:00.017 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:00.276 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:00.276 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:00.276 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:00.276 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:12:00.276 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:00.276 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:00.276 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:00.276 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:00.276 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.276 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.276 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.276 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:00.276 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:00.276 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.276 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.276 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.534 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:00.534 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:00.534 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:00.534 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:00.534 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:00.534 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:00.534 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:00.535 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:00.535 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:00.535 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:00.535 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:00.535 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:00.535 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:00.535 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
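With both referral flavors verified, the script removes them one at a time, re-checking after each removal, and finally asserts the target reports an empty table (jq length of the RPC output is 0) and the host-side discovery log shows no leftover entries. A condensed sketch of the teardown half, under the same rpc.py assumption as above:

    # Drop the subsystem referral, then the discovery-subsystem referral,
    # and require that nothing is left registered on the target.
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery
    left=$(scripts/rpc.py nvmf_discovery_get_referrals | jq length)
    (( left == 0 )) || { echo "stale referrals: $left"; exit 1; }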
00:12:00.535 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:00.535 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:00.535 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:00.535 rmmod nvme_tcp 00:12:00.535 rmmod nvme_fabrics 00:12:00.794 rmmod nvme_keyring 00:12:00.794 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:00.794 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:00.794 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:00.794 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 1609695 ']' 00:12:00.794 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 1609695 00:12:00.794 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 1609695 ']' 00:12:00.794 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 1609695 00:12:00.794 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:00.794 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:00.794 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1609695 00:12:00.794 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:00.794 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:00.794 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1609695' 00:12:00.794 killing process with pid 1609695 00:12:00.794 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 1609695 00:12:00.794 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 1609695 00:12:00.794 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:00.794 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:00.794 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:00.794 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:00.794 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:00.794 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:00.794 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:00.794 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:00.794 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:00.794 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.794 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:00.794 10:39:08 
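nvmftestfini's cleanup above is deliberately best-effort: errexit is switched off, the kernel modules are unloaded with retries (unloading fails while a queue is still draining), the target process is killed by pid, and only the iptables rules the test tagged are rolled back. A condensed sketch of that pattern, with the pid from this run; the retry count mirrors the for i in {1..20} loop in the trace, while the back-off delay is an assumption:

    sync
    set +e
    for i in {1..20}; do
      modprobe -v -r nvme-tcp && break     # also drags out nvme_fabrics/nvme_keyring
      sleep 1                              # assumed back-off; the suite's exact delay isn't shown
    done
    modprobe -v -r nvme-fabrics
    set -e
    kill 1609695 && wait 1609695           # works because nvmf_tgt is this shell's child
    # Restore iptables, dropping only rules the test commented with SPDK_NVMF.
    iptables-save | grep -v SPDK_NVMF | iptables-restore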
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:03.330 00:12:03.330 real 0m10.927s 00:12:03.330 user 0m12.499s 00:12:03.330 sys 0m5.220s 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.330 ************************************ 00:12:03.330 END TEST nvmf_referrals 00:12:03.330 ************************************ 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:03.330 ************************************ 00:12:03.330 START TEST nvmf_connect_disconnect 00:12:03.330 ************************************ 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:03.330 * Looking for test storage... 00:12:03.330 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:03.330 10:39:10 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:03.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.330 --rc genhtml_branch_coverage=1 00:12:03.330 --rc genhtml_function_coverage=1 00:12:03.330 --rc genhtml_legend=1 00:12:03.330 --rc geninfo_all_blocks=1 00:12:03.330 --rc geninfo_unexecuted_blocks=1 00:12:03.330 00:12:03.330 ' 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:03.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.330 --rc genhtml_branch_coverage=1 00:12:03.330 --rc genhtml_function_coverage=1 00:12:03.330 --rc genhtml_legend=1 00:12:03.330 --rc geninfo_all_blocks=1 00:12:03.330 --rc geninfo_unexecuted_blocks=1 00:12:03.330 00:12:03.330 ' 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:03.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.330 --rc genhtml_branch_coverage=1 00:12:03.330 --rc genhtml_function_coverage=1 00:12:03.330 --rc genhtml_legend=1 00:12:03.330 --rc geninfo_all_blocks=1 00:12:03.330 --rc geninfo_unexecuted_blocks=1 00:12:03.330 00:12:03.330 ' 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:03.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.330 --rc genhtml_branch_coverage=1 00:12:03.330 --rc genhtml_function_coverage=1 00:12:03.330 --rc genhtml_legend=1 00:12:03.330 --rc geninfo_all_blocks=1 00:12:03.330 --rc geninfo_unexecuted_blocks=1 00:12:03.330 00:12:03.330 ' 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:03.330 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:03.331 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:03.331 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.331 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.331 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.331 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:03.331 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.331 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:03.331 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:03.331 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:03.331 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:03.331 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:03.331 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:03.331 10:39:10 
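The lcov probe traced during this test's setup runs through scripts/common.sh's version machinery: lt 1.15 2 calls cmp_versions, which splits each version on dots, dashes, and colons (IFS=.-:) and compares the parts numerically, padding the shorter list with zeros. A condensed standalone sketch of the same idea (helper name kept from the trace, body simplified):

    # Return success when "$1 $2 $3" holds, e.g. cmp_versions 1.15 '<' 2.
    cmp_versions() {
      local IFS=.-:
      local -a v1=($1) v2=($3)
      local i a b
      for ((i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++)); do
        a=${v1[i]:-0} b=${v2[i]:-0}
        ((a > b)) && { [[ $2 == '>' ]]; return; }
        ((a < b)) && { [[ $2 == '<' ]]; return; }
      done
      return 1    # versions are equal, so neither strict comparison holds
    }
    cmp_versions 1.15 '<' 2 && echo 'lcov predates 2.x'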
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:03.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:03.331 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:03.331 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:03.331 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:03.331 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:03.331 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:03.331 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:03.331 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:03.331 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:03.331 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:03.331 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:03.331 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:03.331 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.331 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:03.331 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.331 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:03.331 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:03.331 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:03.331 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:09.897 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:09.897 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:09.897 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:09.897 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:09.897 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:09.897 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:09.897 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:09.897 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:09.897 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:09.897 
10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:09.897 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:09.897 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:09.898 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:09.898 
10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:09.898 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:09.898 Found net devices under 0000:86:00.0: cvl_0_0 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
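NIC discovery here is plain sysfs walking: gather_supported_nvmf_pci_devs collects the PCI functions whose vendor:device pair is on the E810 whitelist (0x8086:0x159b on this node) and lists each function's net/ children to get the interface names — cvl_0_0 above, cvl_0_1 in the iteration that continues below. The core loop, reduced to a sketch:

    # Find net interfaces backed by Intel E810 (8086:159b) PCI functions.
    for pci in /sys/bus/pci/devices/*; do
      [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
      for net in "$pci"/net/*; do
        [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
      done
    done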
00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:09.898 Found net devices under 0000:86:00.1: cvl_0_1 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:09.898 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:09.898 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.415 ms 00:12:09.898 00:12:09.898 --- 10.0.0.2 ping statistics --- 00:12:09.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.898 rtt min/avg/max/mdev = 0.415/0.415/0.415/0.000 ms 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:09.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:09.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:12:09.898 00:12:09.898 --- 10.0.0.1 ping statistics --- 00:12:09.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.898 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:09.898 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:09.899 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:09.899 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:09.899 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:09.899 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:09.899 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:09.899 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:09.899 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:09.899 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=1613995 00:12:09.899 10:39:16 
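The networking just brought up is the suite's standard two-endpoint topology on one box: the first E810 port moves into a fresh namespace and becomes the target side (10.0.0.2), the second stays in the root namespace as the initiator (10.0.0.1), an iptables ACCEPT rule tagged SPDK_NVMF opens port 4420, and a ping in each direction proves the path. The same commands, condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port leaves the root netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Tag the rule so teardown can strip exactly what the test added.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
             -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1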
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 1613995 00:12:09.899 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:09.899 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 1613995 ']' 00:12:09.899 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.899 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:09.899 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.899 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:09.899 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:09.899 [2024-11-19 10:39:16.671061] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:12:09.899 [2024-11-19 10:39:16.671115] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:09.899 [2024-11-19 10:39:16.753092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:09.899 [2024-11-19 10:39:16.796809] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:09.899 [2024-11-19 10:39:16.796846] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:09.899 [2024-11-19 10:39:16.796854] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:09.899 [2024-11-19 10:39:16.796860] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:09.899 [2024-11-19 10:39:16.796864] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
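nvmf_tgt is started inside the target namespace, and waitforlisten then blocks until the app's RPC endpoint answers before any rpc_cmd runs. A minimal stand-in for that wait, polling rpc_get_methods against the default /var/tmp/spdk.sock; the polling approach is an assumption, the suite's helper is more involved:

    # Launch the target in the namespace (binary path shortened from this workspace)...
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    pid=$!
    # ...and wait until the RPC socket responds, bailing if the app dies first.
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$pid" 2>/dev/null || { echo 'target exited during startup'; exit 1; }
      sleep 0.5
    done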
00:12:09.899 [2024-11-19 10:39:16.798444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.899 [2024-11-19 10:39:16.798556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:09.899 [2024-11-19 10:39:16.798660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.899 [2024-11-19 10:39:16.798661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:10.158 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:10.158 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:10.158 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:10.158 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:10.158 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:10.158 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:10.158 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:10.158 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.158 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:10.158 [2024-11-19 10:39:17.550219] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:10.158 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.158 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:10.158 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.158 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:10.158 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.158 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:10.158 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:10.158 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.158 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:10.158 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.158 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:10.158 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.158 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:10.416 10:39:17 
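Target-side provisioning is the four RPCs echoed above plus the listener that follows just below: create the TCP transport, back it with a 64 MiB, 512 B-block malloc bdev, create subsystem cnode1 with that namespace, then listen on 10.0.0.2:4420. The same sequence as plain rpc.py calls, flags copied from the trace:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    ./scripts/rpc.py bdev_malloc_create 64 512              # returns bdev name "Malloc0"
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420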
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.416 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:10.416 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.416 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:10.416 [2024-11-19 10:39:17.618045] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:10.416 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.416 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:10.416 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:10.416 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:13.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.986 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.267 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.551 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.833 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.833 10:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:26.833 10:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:26.833 10:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:26.833 10:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:26.833 10:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:26.833 10:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:26.833 10:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:26.833 10:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:26.833 rmmod nvme_tcp 00:12:26.833 rmmod nvme_fabrics 00:12:26.833 rmmod nvme_keyring 00:12:26.833 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:26.833 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:26.833 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:26.833 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 1613995 ']' 00:12:26.833 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 1613995 00:12:26.833 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1613995 ']' 00:12:26.833 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 1613995 00:12:26.833 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
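The five "disconnected 1 controller(s)" lines are the whole point of this test: num_iterations=5 rounds of attaching and detaching the host from cnode1, each disconnect confirming exactly one controller existed. The loop body, sketched from the connect/disconnect pair the suite uses; the fixed sleep is a simplification of the script's readiness polling:

    for i in {1..5}; do
      nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
           --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
      sleep 1    # simplified settle time, not the suite's readiness poll
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints "... disconnected 1 controller(s)"
    done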
00:12:26.833 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:26.833 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1613995 00:12:26.833 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:26.833 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:26.833 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1613995' 00:12:26.833 killing process with pid 1613995 00:12:26.833 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 1613995 00:12:26.833 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 1613995 00:12:27.092 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:27.092 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:27.092 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:27.092 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:27.092 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:12:27.092 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:27.092 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:12:27.092 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:27.092 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:27.092 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.092 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:27.092 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.995 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:28.995 00:12:28.995 real 0m25.986s 00:12:28.995 user 1m11.430s 00:12:28.995 sys 0m5.792s 00:12:28.995 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:28.995 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:28.995 ************************************ 00:12:28.995 END TEST nvmf_connect_disconnect 00:12:28.995 ************************************ 00:12:28.995 10:39:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:28.995 10:39:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:28.995 10:39:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:28.995 10:39:36 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:28.995 ************************************ 00:12:28.995 START TEST nvmf_multitarget 00:12:28.995 ************************************ 00:12:28.995 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:29.255 * Looking for test storage... 00:12:29.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:29.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.255 --rc genhtml_branch_coverage=1 00:12:29.255 --rc genhtml_function_coverage=1 00:12:29.255 --rc genhtml_legend=1 00:12:29.255 --rc geninfo_all_blocks=1 00:12:29.255 --rc geninfo_unexecuted_blocks=1 00:12:29.255 00:12:29.255 ' 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:29.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.255 --rc genhtml_branch_coverage=1 00:12:29.255 --rc genhtml_function_coverage=1 00:12:29.255 --rc genhtml_legend=1 00:12:29.255 --rc geninfo_all_blocks=1 00:12:29.255 --rc geninfo_unexecuted_blocks=1 00:12:29.255 00:12:29.255 ' 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:29.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.255 --rc genhtml_branch_coverage=1 00:12:29.255 --rc genhtml_function_coverage=1 00:12:29.255 --rc genhtml_legend=1 00:12:29.255 --rc geninfo_all_blocks=1 00:12:29.255 --rc geninfo_unexecuted_blocks=1 00:12:29.255 00:12:29.255 ' 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:29.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.255 --rc genhtml_branch_coverage=1 00:12:29.255 --rc genhtml_function_coverage=1 00:12:29.255 --rc genhtml_legend=1 00:12:29.255 --rc geninfo_all_blocks=1 00:12:29.255 --rc geninfo_unexecuted_blocks=1 00:12:29.255 00:12:29.255 ' 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:29.255 10:39:36 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.255 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.256 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.256 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:29.256 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.256 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:29.256 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:29.256 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:29.256 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:29.256 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:29.256 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:29.256 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:29.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:29.256 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:29.256 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:29.256 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:29.256 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:29.256 10:39:36 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:29.256 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:29.256 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:29.256 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:29.256 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:29.256 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:29.256 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.256 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:29.256 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.256 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:29.256 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:29.256 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:29.256 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:35.821 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:35.822 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:35.822 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:35.822 Found net devices under 0000:86:00.0: cvl_0_0 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:35.822 Found net devices under 0000:86:00.1: cvl_0_1 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:35.822 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:35.823 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:12:35.823 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:12:35.823 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:35.823 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.440 ms
00:12:35.823
00:12:35.823 --- 10.0.0.2 ping statistics ---
00:12:35.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:35.823 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms
00:12:35.823 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:35.823 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:35.823 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms
00:12:35.823
00:12:35.823 --- 10.0.0.1 ping statistics ---
00:12:35.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:35.823 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms
00:12:35.823 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:35.823 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0
00:12:35.823 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:12:35.823 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:35.823 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:12:35.823 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:12:35.823 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:35.823 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:12:35.823 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:12:35.823 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF
00:12:35.823 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:12:35.823 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable
00:12:35.823 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:12:35.823 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=1620390
00:12:35.823 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:12:35.823 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 1620390
00:12:35.823 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 1620390 ']'
00:12:35.823 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:35.823 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:35.823 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:35.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:35.823 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:35.823 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:12:35.823 [2024-11-19 10:39:42.656939] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
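#
# Sketch (reconstruction, not captured log output): the network plumbing that
# nvmf_tcp_init just performed above, gathered in one runnable block. The
# interface names (cvl_0_0/cvl_0_1), addresses, and port are the ones this
# trace reports; root privileges are assumed, and the harness actually tags
# its iptables rule with a longer 'SPDK_NVMF:'-prefixed comment than the
# plain tag used here.
ip netns add cvl_0_0_ns_spdk                        # target side gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the first port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe-oF port, tagging the rule so the iptr cleanup can strip it later
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
#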
00:12:35.823 [2024-11-19 10:39:42.656988] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:35.823 [2024-11-19 10:39:42.736425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:35.823 [2024-11-19 10:39:42.779462] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:35.823 [2024-11-19 10:39:42.779499] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:35.823 [2024-11-19 10:39:42.779506] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:35.823 [2024-11-19 10:39:42.779512] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:35.823 [2024-11-19 10:39:42.779517] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:35.823 [2024-11-19 10:39:42.781130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:35.823 [2024-11-19 10:39:42.781235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:35.823 [2024-11-19 10:39:42.781343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.823 [2024-11-19 10:39:42.781344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:35.823 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:35.823 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:12:35.823 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:35.823 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:35.823 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:35.823 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:35.823 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:35.823 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:35.823 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:35.823 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:35.823 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:35.823 "nvmf_tgt_1" 00:12:35.823 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:35.823 "nvmf_tgt_2" 00:12:35.823 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
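#
# Sketch (reconstruction, not captured log output): the create/verify/delete
# sequence the multitarget test drives in the entries that follow. The rpc
# path and all flags are copied verbatim from the trace; assert_target_count
# is hypothetical shorthand, not a harness function.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

assert_target_count() {
  # nvmf_get_targets returns a JSON array with one entry per target
  [ "$($rpc nvmf_get_targets | jq length)" = "$1" ] || exit 1
}

assert_target_count 1                          # only the default target so far
$rpc nvmf_create_target -n nvmf_tgt_1 -s 32    # -s 32 taken verbatim from the trace
$rpc nvmf_create_target -n nvmf_tgt_2 -s 32
assert_target_count 3
$rpc nvmf_delete_target -n nvmf_tgt_1
$rpc nvmf_delete_target -n nvmf_tgt_2
assert_target_count 1                          # back to just the default target
#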
00:12:35.823 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:36.081 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:36.081 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:36.081 true 00:12:36.081 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:36.339 true 00:12:36.339 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:36.339 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:36.339 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:36.339 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:36.339 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:36.340 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:36.340 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:36.340 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:36.340 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:36.340 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:36.340 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:36.340 rmmod nvme_tcp 00:12:36.340 rmmod nvme_fabrics 00:12:36.340 rmmod nvme_keyring 00:12:36.340 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:36.340 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:36.340 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:36.340 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 1620390 ']' 00:12:36.340 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 1620390 00:12:36.340 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 1620390 ']' 00:12:36.340 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 1620390 00:12:36.340 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:12:36.340 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:36.340 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1620390 00:12:36.599 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:36.599 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:36.599 10:39:43 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1620390' 00:12:36.599 killing process with pid 1620390 00:12:36.599 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 1620390 00:12:36.599 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 1620390 00:12:36.599 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:36.599 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:36.599 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:36.599 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:36.599 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:36.599 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:36.599 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:36.599 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:36.599 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:36.599 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.599 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:36.599 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.133 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:39.133 00:12:39.133 real 0m9.607s 00:12:39.133 user 0m7.237s 00:12:39.133 sys 0m4.917s 00:12:39.133 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:39.133 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:39.133 ************************************ 00:12:39.133 END TEST nvmf_multitarget 00:12:39.133 ************************************ 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:39.134 ************************************ 00:12:39.134 START TEST nvmf_rpc 00:12:39.134 ************************************ 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:39.134 * Looking for test storage... 
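#
# Sketch (reconstruction, not captured log output): the iptr cleanup idiom
# traced at nvmf/common.sh@791 in the teardown above. Rather than deleting
# its firewall rules one by one, the harness rewrites the whole ruleset with
# every SPDK_NVMF-tagged entry filtered out.
iptables-save | grep -v SPDK_NVMF | iptables-restore
#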
00:12:39.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:39.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.134 --rc genhtml_branch_coverage=1 00:12:39.134 --rc genhtml_function_coverage=1 00:12:39.134 --rc genhtml_legend=1 00:12:39.134 --rc geninfo_all_blocks=1 00:12:39.134 --rc geninfo_unexecuted_blocks=1 00:12:39.134 00:12:39.134 ' 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:39.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.134 --rc genhtml_branch_coverage=1 00:12:39.134 --rc genhtml_function_coverage=1 00:12:39.134 --rc genhtml_legend=1 00:12:39.134 --rc geninfo_all_blocks=1 00:12:39.134 --rc geninfo_unexecuted_blocks=1 00:12:39.134 00:12:39.134 ' 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:39.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.134 --rc genhtml_branch_coverage=1 00:12:39.134 --rc genhtml_function_coverage=1 00:12:39.134 --rc genhtml_legend=1 00:12:39.134 --rc geninfo_all_blocks=1 00:12:39.134 --rc geninfo_unexecuted_blocks=1 00:12:39.134 00:12:39.134 ' 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:39.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.134 --rc genhtml_branch_coverage=1 00:12:39.134 --rc genhtml_function_coverage=1 00:12:39.134 --rc genhtml_legend=1 00:12:39.134 --rc geninfo_all_blocks=1 00:12:39.134 --rc geninfo_unexecuted_blocks=1 00:12:39.134 00:12:39.134 ' 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
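#
# Sketch (reconstruction, not the exact library source): the dotted-version
# comparison idiom scripts/common.sh traces above for `lt 1.15 2`, used on
# the lcov version. Fields are split on dots, dashes, and colons and compared
# numerically left to right, with missing or non-numeric fields treated as 0.
decimal() {
  [[ $1 =~ ^[0-9]+$ ]] && echo "$1" || echo 0
}

lt() {
  local -a ver1 ver2
  local IFS=.-: v max a b              # split on dots, dashes, and colons
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < max; v++ )); do
    a=$(decimal "${ver1[v]:-0}")
    b=$(decimal "${ver2[v]:-0}")
    (( a > b )) && return 1
    (( a < b )) && return 0
  done
  return 1                             # equal versions are not "less than"
}

lt 1.15 2 && echo "older than 2.x"     # matches the branch taken in the trace
#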
00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.134 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.135 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.135 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:39.135 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.135 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:39.135 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:39.135 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:39.135 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:39.135 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:39.135 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:39.135 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:39.135 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:39.135 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:39.135 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:39.135 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:39.135 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:39.135 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:39.135 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:39.135 10:39:46 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:39.135 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:39.135 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:39.135 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:39.135 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.135 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:39.135 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.135 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:39.135 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:39.135 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:39.135 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:45.705 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:45.705 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.705 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:45.706 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:45.706 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:45.706 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:45.706 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.706 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:45.706 Found net devices under 0000:86:00.0: cvl_0_0 00:12:45.706 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.706 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:45.706 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.706 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:45.706 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:45.706 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:45.706 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:45.706 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.706 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:45.706 Found net devices under 0000:86:00.1: cvl_0_1 00:12:45.706 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.706 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:45.706 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:45.706 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:45.706 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:45.706 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:45.706 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:45.706 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:45.706 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:45.706 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:45.706 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:45.706 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:45.706 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:45.706 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:45.706 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:45.706 10:39:51 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:12:45.706 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:12:45.706 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:12:45.706 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:12:45.706 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:12:45.706 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:12:45.706 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:12:45.706 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:12:45.706 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:12:45.706 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:12:45.706 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:45.706 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:45.706 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:12:45.706 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:12:45.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:45.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.460 ms
00:12:45.706 
00:12:45.706 --- 10.0.0.2 ping statistics ---
00:12:45.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:45.706 rtt min/avg/max/mdev = 0.460/0.460/0.460/0.000 ms
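
For readers reproducing this rig by hand: the nvmf_tcp_init trace above reduces to a short iproute2/iptables sequence. A minimal sketch, assuming two ports of one NIC cabled back-to-back and renamed cvl_0_0/cvl_0_1 as on this machine (the interface names and the 10.0.0.0/24 subnet are this harness's conventions, not requirements):

  # move the target-side port into its own namespace so target and
  # initiator traffic really crosses the wire
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator keeps 10.0.0.1; the target gets 10.0.0.2 inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # let NVMe/TCP (port 4420) in through the initiator-side interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # sanity-check both directions before starting the target
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
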
00:12:45.706 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:45.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:45.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms
00:12:45.706 
00:12:45.706 --- 10.0.0.1 ping statistics ---
00:12:45.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:45.706 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms
00:12:45.706 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:45.706 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0
00:12:45.706 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:12:45.706 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:45.706 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:12:45.706 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:12:45.706 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:45.706 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:12:45.706 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:12:45.706 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF
00:12:45.706 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:12:45.706 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable
00:12:45.706 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:45.706 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=1624178
00:12:45.706 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 1624178
00:12:45.706 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:12:45.706 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 1624178 ']'
00:12:45.706 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:45.706 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:45.706 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:45.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:45.706 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:45.706 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:45.706 [2024-11-19 10:39:52.331874] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
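
nvmfappstart then runs the target inside that namespace and waits for the RPC socket to come up. A rough equivalent of the launch-and-wait step (a sketch; the real waitforlisten helper retries up to 100 times against /var/tmp/spdk.sock as traced above, and polling rpc_get_methods here is an assumption about how "listening" is detected):

  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll until the app serves its UNIX-domain RPC socket
  for ((i = 0; i < 100; i++)); do
    if scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
      break
    fi
    sleep 0.5
  done
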
00:12:45.706 [2024-11-19 10:39:52.331925] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:45.706 [2024-11-19 10:39:52.412534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:45.706 [2024-11-19 10:39:52.455633] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:45.706 [2024-11-19 10:39:52.455669] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:45.707 [2024-11-19 10:39:52.455676] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:45.707 [2024-11-19 10:39:52.455683] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:45.707 [2024-11-19 10:39:52.455688] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:45.707 [2024-11-19 10:39:52.457236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:45.707 [2024-11-19 10:39:52.457327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:45.707 [2024-11-19 10:39:52.457436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.707 [2024-11-19 10:39:52.457437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:45.707 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:45.707 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:45.707 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:45.707 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:45.707 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.707 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:45.707 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:45.707 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.707 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.707 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.707 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:45.707 "tick_rate": 2300000000, 00:12:45.707 "poll_groups": [ 00:12:45.707 { 00:12:45.707 "name": "nvmf_tgt_poll_group_000", 00:12:45.707 "admin_qpairs": 0, 00:12:45.707 "io_qpairs": 0, 00:12:45.707 "current_admin_qpairs": 0, 00:12:45.707 "current_io_qpairs": 0, 00:12:45.707 "pending_bdev_io": 0, 00:12:45.707 "completed_nvme_io": 0, 00:12:45.707 "transports": [] 00:12:45.707 }, 00:12:45.707 { 00:12:45.707 "name": "nvmf_tgt_poll_group_001", 00:12:45.707 "admin_qpairs": 0, 00:12:45.707 "io_qpairs": 0, 00:12:45.707 "current_admin_qpairs": 0, 00:12:45.707 "current_io_qpairs": 0, 00:12:45.707 "pending_bdev_io": 0, 00:12:45.707 "completed_nvme_io": 0, 00:12:45.707 "transports": [] 00:12:45.707 }, 00:12:45.707 { 00:12:45.707 "name": "nvmf_tgt_poll_group_002", 00:12:45.707 "admin_qpairs": 0, 00:12:45.707 "io_qpairs": 0, 00:12:45.707 
"current_admin_qpairs": 0, 00:12:45.707 "current_io_qpairs": 0, 00:12:45.707 "pending_bdev_io": 0, 00:12:45.707 "completed_nvme_io": 0, 00:12:45.707 "transports": [] 00:12:45.707 }, 00:12:45.707 { 00:12:45.707 "name": "nvmf_tgt_poll_group_003", 00:12:45.707 "admin_qpairs": 0, 00:12:45.707 "io_qpairs": 0, 00:12:45.707 "current_admin_qpairs": 0, 00:12:45.707 "current_io_qpairs": 0, 00:12:45.707 "pending_bdev_io": 0, 00:12:45.707 "completed_nvme_io": 0, 00:12:45.707 "transports": [] 00:12:45.707 } 00:12:45.707 ] 00:12:45.707 }' 00:12:45.707 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:45.707 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:45.707 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:45.707 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:45.707 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:45.707 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:45.707 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:45.707 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:45.707 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.707 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.707 [2024-11-19 10:39:52.703439] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:45.707 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.707 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:45.707 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.707 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.707 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.707 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:45.707 "tick_rate": 2300000000, 00:12:45.707 "poll_groups": [ 00:12:45.707 { 00:12:45.707 "name": "nvmf_tgt_poll_group_000", 00:12:45.707 "admin_qpairs": 0, 00:12:45.707 "io_qpairs": 0, 00:12:45.707 "current_admin_qpairs": 0, 00:12:45.707 "current_io_qpairs": 0, 00:12:45.707 "pending_bdev_io": 0, 00:12:45.707 "completed_nvme_io": 0, 00:12:45.707 "transports": [ 00:12:45.707 { 00:12:45.707 "trtype": "TCP" 00:12:45.707 } 00:12:45.707 ] 00:12:45.707 }, 00:12:45.707 { 00:12:45.707 "name": "nvmf_tgt_poll_group_001", 00:12:45.707 "admin_qpairs": 0, 00:12:45.707 "io_qpairs": 0, 00:12:45.707 "current_admin_qpairs": 0, 00:12:45.707 "current_io_qpairs": 0, 00:12:45.707 "pending_bdev_io": 0, 00:12:45.707 "completed_nvme_io": 0, 00:12:45.707 "transports": [ 00:12:45.707 { 00:12:45.707 "trtype": "TCP" 00:12:45.707 } 00:12:45.707 ] 00:12:45.707 }, 00:12:45.707 { 00:12:45.707 "name": "nvmf_tgt_poll_group_002", 00:12:45.707 "admin_qpairs": 0, 00:12:45.707 "io_qpairs": 0, 00:12:45.707 "current_admin_qpairs": 0, 00:12:45.707 "current_io_qpairs": 0, 00:12:45.707 "pending_bdev_io": 0, 00:12:45.707 "completed_nvme_io": 0, 00:12:45.707 "transports": [ 00:12:45.707 { 00:12:45.707 "trtype": "TCP" 
00:12:45.707 } 00:12:45.707 ] 00:12:45.707 }, 00:12:45.707 { 00:12:45.707 "name": "nvmf_tgt_poll_group_003", 00:12:45.707 "admin_qpairs": 0, 00:12:45.707 "io_qpairs": 0, 00:12:45.707 "current_admin_qpairs": 0, 00:12:45.707 "current_io_qpairs": 0, 00:12:45.707 "pending_bdev_io": 0, 00:12:45.707 "completed_nvme_io": 0, 00:12:45.707 "transports": [ 00:12:45.707 { 00:12:45.707 "trtype": "TCP" 00:12:45.707 } 00:12:45.707 ] 00:12:45.707 } 00:12:45.707 ] 00:12:45.707 }' 00:12:45.707 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:45.707 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:45.707 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:45.707 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:45.707 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:45.707 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:45.707 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:45.707 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:45.707 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:45.707 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:45.708 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:45.708 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:45.708 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:45.708 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:45.708 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.708 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.708 Malloc1 00:12:45.708 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.708 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:45.708 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.708 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.708 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.708 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:45.708 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.708 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.708 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.708 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:45.708 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.708 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.708 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.708 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.708 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.708 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.708 [2024-11-19 10:39:52.883792] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.708 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.708 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:45.708 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:45.708 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:45.708 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:45.708 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:45.708 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:45.708 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:45.708 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:45.708 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:45.708 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:45.708 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:45.708 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:45.708 [2024-11-19 10:39:52.912345] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:12:45.708 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:45.708 could not add new controller: failed to write to nvme-fabrics device 00:12:45.708 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:45.708 10:39:52 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:45.708 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:45.708 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:45.708 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:45.708 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.708 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.708 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.708 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:46.644 10:39:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:46.644 10:39:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:46.644 10:39:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:46.644 10:39:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:46.644 10:39:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:49.174 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:49.174 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:49.174 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:49.174 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:49.174 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:49.174 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:49.174 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:49.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.174 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:49.174 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:49.174 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:49.174 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.174 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:49.174 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.174 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:49.174 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:49.174 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.174 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.174 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.174 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:49.174 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:49.174 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:49.174 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:49.174 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:49.174 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:49.174 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:49.174 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:49.174 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:49.174 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:49.174 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:49.174 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:49.174 [2024-11-19 10:39:56.198427] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:12:49.174 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:49.174 could not add new controller: failed to write to nvme-fabrics device 00:12:49.174 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:49.174 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:49.174 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:49.174 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:49.174 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:49.174 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.174 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.174 
10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.174 10:39:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:50.108 10:39:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:50.108 10:39:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:50.108 10:39:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:50.108 10:39:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:50.108 10:39:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:52.006 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:52.006 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:52.006 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:52.006 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:52.006 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:52.006 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:52.006 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:52.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.264 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:52.264 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:52.264 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:52.264 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.264 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:52.264 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.264 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:52.264 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.264 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.264 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.264 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.264 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:52.264 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:52.264 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:52.264 
10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.264 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.264 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.264 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.264 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.264 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.264 [2024-11-19 10:39:59.606246] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.264 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.264 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:52.264 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.264 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.264 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.264 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:52.264 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.264 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.264 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.264 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:53.669 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:53.669 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:53.669 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:53.669 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:53.669 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:55.642 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:55.642 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:55.642 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:55.642 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:55.642 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:55.642 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:55.642 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:55.642 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.642 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:55.642 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:55.642 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:55.642 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:55.642 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:55.642 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:55.642 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:55.642 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:55.642 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.642 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.642 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.642 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:55.642 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.642 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.642 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.642 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:55.642 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:55.642 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.642 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.642 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.642 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.642 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.642 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.642 [2024-11-19 10:40:02.906324] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.642 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.643 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:55.643 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.643 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.643 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.643 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:55.643 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.643 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.643 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.643 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:56.576 10:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:56.576 10:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:56.576 10:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:56.576 10:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:56.576 10:40:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:59.105 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:59.105 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:59.105 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:59.105 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:59.105 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:59.105 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:59.105 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:59.105 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.105 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:59.105 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:59.105 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:59.105 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.105 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:59.105 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.105 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:59.105 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:59.105 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.105 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.105 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.105 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:59.105 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.105 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.105 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.105 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:59.105 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:59.105 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.105 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.105 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.105 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:59.105 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.105 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.105 [2024-11-19 10:40:06.211424] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:59.105 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.105 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:59.105 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.105 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.105 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.105 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:59.105 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.105 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.105 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.105 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:00.039 10:40:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:00.039 10:40:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:00.039 10:40:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:00.039 10:40:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:00.039 10:40:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:01.934 
10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:01.934 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:01.934 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:01.934 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:01.934 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:01.934 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:01.934 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:02.192 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.192 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:02.192 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:02.192 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:02.192 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:02.192 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:02.192 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:02.192 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:02.192 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:02.192 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.192 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.192 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.192 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:02.192 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.192 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.192 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.192 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:02.192 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:02.192 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.192 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.192 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.192 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:02.192 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
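
Each of the five iterations traced here has the same shape; condensed, the loop is roughly this sequence of rpc.py calls and nvme-cli commands (a sketch; rpc_cmd in the trace resolves to rpc.py against the target's socket, and the NQNs, serial, and host UUID are the values used by this run):

  for i in $(seq 1 5); do
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
      --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    # waitforserial SPDKISFASTANDAWESOME  (see the sketch below)
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    # waitforserial_disconnect SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done
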
00:13:02.192 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.192 [2024-11-19 10:40:09.476272] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:02.192 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.192 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:02.192 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.193 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.193 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.193 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:02.193 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.193 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.193 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.193 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:03.564 10:40:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:03.564 10:40:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:03.564 10:40:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:03.564 10:40:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:03.564 10:40:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:05.461 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:05.461 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:05.462 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:05.462 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:05.462 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:05.462 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:05.462 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:05.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.462 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:05.462 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:05.462 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:05.462 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
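
waitforserial and waitforserial_disconnect, whose retry loops dominate this part of the trace, amount to polling lsblk for the subsystem's serial number. A reconstruction from the traced lines, simplified to the single-device case used here (15 retries, 2 s apart):

  waitforserial() {
    local serial=$1 i=0
    while (( i++ <= 15 )); do
      # block device shows up once the controller is connected
      (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == 1 )) && return 0
      sleep 2
    done
    return 1
  }
  waitforserial_disconnect() {
    local serial=$1 i=0
    while (( i++ <= 15 )); do
      # done once no block device carries the serial any more
      lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
      sleep 2
    done
    return 1
  }
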
00:13:05.462 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:05.462 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:05.462 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:05.462 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:05.462 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.462 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.462 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.462 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:05.462 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.462 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.462 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.462 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:05.462 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:05.462 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.462 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.462 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.462 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:05.462 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.462 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.462 [2024-11-19 10:40:12.788127] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:05.462 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.462 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:05.462 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.462 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.462 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.462 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:05.462 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.462 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.462 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.462 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:06.834 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:06.834 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:06.834 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:06.834 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:06.834 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:08.732 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:08.732 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:08.732 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:08.732 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:08.732 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:08.732 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:08.732 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:08.732 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.732 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:08.732 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:08.732 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:08.732 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.732 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:08.732 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:08.732 
10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.732 [2024-11-19 10:40:16.059524] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.732 [2024-11-19 10:40:16.107653] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.732 
10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.732 [2024-11-19 10:40:16.155788] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.732 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.990 [2024-11-19 10:40:16.203963] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.990 [2024-11-19 10:40:16.252131] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.990 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:08.991 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.991 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.991 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.991 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:08.991 "tick_rate": 2300000000, 00:13:08.991 "poll_groups": [ 00:13:08.991 { 00:13:08.991 "name": "nvmf_tgt_poll_group_000", 00:13:08.991 "admin_qpairs": 2, 00:13:08.991 "io_qpairs": 168, 00:13:08.991 "current_admin_qpairs": 0, 00:13:08.991 "current_io_qpairs": 0, 00:13:08.991 "pending_bdev_io": 0, 00:13:08.991 "completed_nvme_io": 219, 00:13:08.991 "transports": [ 00:13:08.991 { 00:13:08.991 "trtype": "TCP" 00:13:08.991 } 00:13:08.991 ] 00:13:08.991 }, 00:13:08.991 { 00:13:08.991 "name": "nvmf_tgt_poll_group_001", 00:13:08.991 "admin_qpairs": 2, 00:13:08.991 "io_qpairs": 168, 00:13:08.991 "current_admin_qpairs": 0, 00:13:08.991 "current_io_qpairs": 0, 00:13:08.991 "pending_bdev_io": 0, 00:13:08.991 "completed_nvme_io": 266, 00:13:08.991 "transports": [ 00:13:08.991 { 00:13:08.991 "trtype": "TCP" 00:13:08.991 } 00:13:08.991 ] 00:13:08.991 }, 00:13:08.991 { 00:13:08.991 "name": "nvmf_tgt_poll_group_002", 00:13:08.991 "admin_qpairs": 1, 00:13:08.991 "io_qpairs": 168, 00:13:08.991 "current_admin_qpairs": 0, 00:13:08.991 "current_io_qpairs": 0, 00:13:08.991 "pending_bdev_io": 0, 00:13:08.991 "completed_nvme_io": 267, 00:13:08.991 "transports": [ 00:13:08.991 { 00:13:08.991 "trtype": "TCP" 00:13:08.991 } 00:13:08.991 ] 00:13:08.991 }, 00:13:08.991 { 00:13:08.991 "name": "nvmf_tgt_poll_group_003", 00:13:08.991 "admin_qpairs": 2, 00:13:08.991 "io_qpairs": 168, 00:13:08.991 "current_admin_qpairs": 0, 00:13:08.991 "current_io_qpairs": 0, 00:13:08.991 "pending_bdev_io": 0, 00:13:08.991 "completed_nvme_io": 270, 00:13:08.991 "transports": [ 00:13:08.991 { 00:13:08.991 "trtype": "TCP" 00:13:08.991 } 00:13:08.991 ] 00:13:08.991 } 00:13:08.991 ] 00:13:08.991 }' 00:13:08.991 10:40:16 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:08.991 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:08.991 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:08.991 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:08.991 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:08.991 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:08.991 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:08.991 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:08.991 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:08.991 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:13:08.991 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:08.991 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:08.991 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:08.991 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:08.991 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:08.991 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:08.991 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:08.991 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:08.991 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:08.991 rmmod nvme_tcp 00:13:08.991 rmmod nvme_fabrics 00:13:09.250 rmmod nvme_keyring 00:13:09.250 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:09.250 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:09.250 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:09.250 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 1624178 ']' 00:13:09.250 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 1624178 00:13:09.250 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 1624178 ']' 00:13:09.250 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 1624178 00:13:09.250 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:13:09.250 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:09.250 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1624178 00:13:09.250 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:09.250 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:09.250 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
1624178' 00:13:09.250 killing process with pid 1624178 00:13:09.250 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 1624178 00:13:09.250 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 1624178 00:13:09.509 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:09.509 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:09.509 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:09.509 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:09.509 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:13:09.509 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:09.509 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:13:09.509 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:09.509 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:09.509 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.509 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:09.509 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.415 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:11.415 00:13:11.415 real 0m32.677s 00:13:11.415 user 1m38.376s 00:13:11.415 sys 0m6.487s 00:13:11.415 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:11.415 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.415 ************************************ 00:13:11.415 END TEST nvmf_rpc 00:13:11.415 ************************************ 00:13:11.415 10:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:11.415 10:40:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:11.415 10:40:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:11.415 10:40:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:11.415 ************************************ 00:13:11.415 START TEST nvmf_invalid 00:13:11.415 ************************************ 00:13:11.415 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:11.676 * Looking for test storage... 
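The stats check that closed nvmf_rpc above relies on the jsum helper from target/rpc.sh, which sums one numeric field across every poll group in the nvmf_get_stats JSON; a hedged sketch of that aggregation (the rpc.py invocation is shown for illustration):

# sum .poll_groups[].io_qpairs over the nvmf_get_stats output
./scripts/rpc.py nvmf_get_stats \
  | jq '.poll_groups[].io_qpairs' \
  | awk '{s+=$1} END {print s}'   # 4 poll groups x 168 io_qpairs = 672 in the run above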
00:13:11.676 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:11.676 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:11.676 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:13:11.676 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:11.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.676 --rc genhtml_branch_coverage=1 00:13:11.676 --rc genhtml_function_coverage=1 00:13:11.676 --rc genhtml_legend=1 00:13:11.676 --rc geninfo_all_blocks=1 00:13:11.676 --rc geninfo_unexecuted_blocks=1 00:13:11.676 00:13:11.676 ' 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:11.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.676 --rc genhtml_branch_coverage=1 00:13:11.676 --rc genhtml_function_coverage=1 00:13:11.676 --rc genhtml_legend=1 00:13:11.676 --rc geninfo_all_blocks=1 00:13:11.676 --rc geninfo_unexecuted_blocks=1 00:13:11.676 00:13:11.676 ' 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:11.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.676 --rc genhtml_branch_coverage=1 00:13:11.676 --rc genhtml_function_coverage=1 00:13:11.676 --rc genhtml_legend=1 00:13:11.676 --rc geninfo_all_blocks=1 00:13:11.676 --rc geninfo_unexecuted_blocks=1 00:13:11.676 00:13:11.676 ' 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:11.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.676 --rc genhtml_branch_coverage=1 00:13:11.676 --rc genhtml_function_coverage=1 00:13:11.676 --rc genhtml_legend=1 00:13:11.676 --rc geninfo_all_blocks=1 00:13:11.676 --rc geninfo_unexecuted_blocks=1 00:13:11.676 00:13:11.676 ' 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:11.676 10:40:19 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:11.676 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.677 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.677 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.677 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:11.677 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.677 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:11.677 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:11.677 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:11.677 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:11.677 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:11.677 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:11.677 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:11.677 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:11.677 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:11.677 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:11.677 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:11.677 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:11.677 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:11.677 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:11.677 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:11.677 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:11.677 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:11.677 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:11.677 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:11.677 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:11.677 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:11.677 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:11.677 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.677 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:11.677 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.677 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:11.677 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:11.677 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:11.677 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:18.245 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:18.245 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:18.245 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:18.245 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:18.245 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:18.245 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:18.245 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:18.245 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:18.245 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:18.245 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:18.245 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:18.245 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:18.246 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:18.246 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:18.246 Found net devices under 0000:86:00.0: cvl_0_0 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:18.246 Found net devices under 0000:86:00.1: cvl_0_1 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:18.246 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:18.246 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms 00:13:18.246 00:13:18.246 --- 10.0.0.2 ping statistics --- 00:13:18.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.246 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:18.246 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:18.246 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:13:18.246 00:13:18.246 --- 10.0.0.1 ping statistics --- 00:13:18.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.246 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:18.246 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:18.246 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:18.247 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:18.247 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:18.247 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:18.247 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=1631792 00:13:18.247 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:18.247 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 1631792 00:13:18.247 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 1631792 ']' 00:13:18.247 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.247 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:18.247 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:18.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:18.247 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:18.247 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:18.247 [2024-11-19 10:40:25.077657] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:13:18.247 [2024-11-19 10:40:25.077701] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:18.247 [2024-11-19 10:40:25.155698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:18.247 [2024-11-19 10:40:25.196545] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:18.247 [2024-11-19 10:40:25.196585] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:18.247 [2024-11-19 10:40:25.196592] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:18.247 [2024-11-19 10:40:25.196598] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:18.247 [2024-11-19 10:40:25.196603] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:18.247 [2024-11-19 10:40:25.198074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:18.247 [2024-11-19 10:40:25.198179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:18.247 [2024-11-19 10:40:25.198288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.247 [2024-11-19 10:40:25.198289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:18.247 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:18.247 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:13:18.247 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:18.247 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:18.247 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:18.247 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:18.247 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:18.247 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode32406 00:13:18.247 [2024-11-19 10:40:25.507775] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:18.247 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:18.247 { 00:13:18.247 "nqn": "nqn.2016-06.io.spdk:cnode32406", 00:13:18.247 "tgt_name": "foobar", 00:13:18.247 "method": "nvmf_create_subsystem", 00:13:18.247 "req_id": 1 00:13:18.247 } 00:13:18.247 Got JSON-RPC error response 00:13:18.247 response: 00:13:18.247 { 00:13:18.247 "code": -32603, 00:13:18.247 "message": "Unable to find target foobar" 00:13:18.247 }' 00:13:18.247 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:18.247 { 00:13:18.247 "nqn": "nqn.2016-06.io.spdk:cnode32406", 00:13:18.247 "tgt_name": "foobar", 00:13:18.247 "method": "nvmf_create_subsystem", 00:13:18.247 "req_id": 1 00:13:18.247 } 00:13:18.247 Got JSON-RPC error response 00:13:18.247 
response: 00:13:18.247 { 00:13:18.247 "code": -32603, 00:13:18.247 "message": "Unable to find target foobar" 00:13:18.247 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:18.247 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:18.247 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode10051 00:13:18.505 [2024-11-19 10:40:25.716504] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10051: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:18.505 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:18.505 { 00:13:18.505 "nqn": "nqn.2016-06.io.spdk:cnode10051", 00:13:18.505 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:18.505 "method": "nvmf_create_subsystem", 00:13:18.505 "req_id": 1 00:13:18.505 } 00:13:18.505 Got JSON-RPC error response 00:13:18.505 response: 00:13:18.505 { 00:13:18.505 "code": -32602, 00:13:18.505 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:18.505 }' 00:13:18.505 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:18.505 { 00:13:18.505 "nqn": "nqn.2016-06.io.spdk:cnode10051", 00:13:18.505 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:18.505 "method": "nvmf_create_subsystem", 00:13:18.505 "req_id": 1 00:13:18.505 } 00:13:18.505 Got JSON-RPC error response 00:13:18.505 response: 00:13:18.505 { 00:13:18.505 "code": -32602, 00:13:18.505 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:18.505 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:18.505 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:18.505 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode14107 00:13:18.505 [2024-11-19 10:40:25.933246] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14107: invalid model number 'SPDK_Controller' 00:13:18.763 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:18.763 { 00:13:18.763 "nqn": "nqn.2016-06.io.spdk:cnode14107", 00:13:18.763 "model_number": "SPDK_Controller\u001f", 00:13:18.763 "method": "nvmf_create_subsystem", 00:13:18.763 "req_id": 1 00:13:18.763 } 00:13:18.763 Got JSON-RPC error response 00:13:18.763 response: 00:13:18.763 { 00:13:18.763 "code": -32602, 00:13:18.763 "message": "Invalid MN SPDK_Controller\u001f" 00:13:18.763 }' 00:13:18.763 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:18.763 { 00:13:18.763 "nqn": "nqn.2016-06.io.spdk:cnode14107", 00:13:18.763 "model_number": "SPDK_Controller\u001f", 00:13:18.763 "method": "nvmf_create_subsystem", 00:13:18.763 "req_id": 1 00:13:18.763 } 00:13:18.763 Got JSON-RPC error response 00:13:18.763 response: 00:13:18.763 { 00:13:18.763 "code": -32602, 00:13:18.763 "message": "Invalid MN SPDK_Controller\u001f" 00:13:18.763 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:18.763 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:18.763 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:18.763 10:40:25 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:18.763 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:18.763 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:18.763 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:18.763 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.763 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:13:18.763 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:18.763 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:13:18.763 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.763 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.763 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:18.763 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:18.763 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:18.763 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.763 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.763 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:18.763 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:18.763 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:18.763 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.763 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.763 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:18.763 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:18.763 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:18.763 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.763 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.763 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:18.763 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:18.763 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:18.763 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.764 10:40:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:18.764 
10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 
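This long stretch of trace (continuing below) is gen_random_s assembling a random string one byte at a time: draw a code point from the chars table (ASCII 32..127), render it with printf %x plus echo -e, and append it to the result. A compact sketch of the same idea, not the helper's exact body:

  gen_random_s() {
      local length=$1 ll code string=
      for ((ll = 0; ll < length; ll++)); do
          code=$((32 + RANDOM % 96))                     # 32..127; 0x7f (DEL) is in range on purpose
          string+=$(echo -e "\\x$(printf %x "$code")")
      done
      echo "$string"
  }
  gen_random_s 21        # e.g. the 21-byte serial number being built here

Keeping 0x7f in the pool is what makes these strings useful: a serial or model number containing a control byte must be rejected by the target, which is exactly what the RPC calls below assert.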
00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ( == \- ]] 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '([Mhgb(ZTG6F\u|zobv' 00:13:18.764 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '([Mhgb(ZTG6F\u|zobv' nqn.2016-06.io.spdk:cnode23916 00:13:19.021 [2024-11-19 10:40:26.274451] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23916: invalid serial number '([Mhgb(ZTG6F\u|zobv' 00:13:19.021 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:19.021 { 00:13:19.021 "nqn": "nqn.2016-06.io.spdk:cnode23916", 00:13:19.021 "serial_number": "([Mhgb\u007f(ZTG6F\\u\u007f|zobv", 00:13:19.021 "method": "nvmf_create_subsystem", 00:13:19.021 "req_id": 1 00:13:19.021 } 00:13:19.021 Got JSON-RPC error response 00:13:19.021 response: 00:13:19.021 { 00:13:19.021 "code": -32602, 00:13:19.021 "message": "Invalid SN ([Mhgb\u007f(ZTG6F\\u\u007f|zobv" 00:13:19.021 }' 00:13:19.021 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:19.021 { 00:13:19.021 "nqn": "nqn.2016-06.io.spdk:cnode23916", 00:13:19.021 "serial_number": "([Mhgb\u007f(ZTG6F\\u\u007f|zobv", 00:13:19.021 "method": "nvmf_create_subsystem", 00:13:19.021 "req_id": 1 00:13:19.021 } 00:13:19.021 Got JSON-RPC error response 00:13:19.021 response: 00:13:19.021 { 00:13:19.021 "code": -32602, 00:13:19.021 "message": "Invalid SN ([Mhgb\u007f(ZTG6F\\u\u007f|zobv" 00:13:19.021 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:19.021 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:19.021 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:19.021 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' 
'67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:19.021 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:19.021 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:19.021 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:19.021 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.021 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:19.021 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:19.021 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:19.021 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.021 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.021 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:19.021 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:19.021 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:19.021 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.021 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.021 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x57' 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 94 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.022 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:13:19.280 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:19.280 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:13:19.280 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.280 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.280 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:19.280 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:19.280 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:19.280 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.280 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.280 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:19.280 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:19.280 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:19.280 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.280 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.280 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:19.280 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:19.280 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:19.280 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.280 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.280 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:19.280 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:19.280 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:19.280 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.280 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.280 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:19.280 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:19.280 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:19.280 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.280 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.280 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:13:19.280 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:19.280 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:13:19.280 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.280 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.280 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:19.280 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:19.280 10:40:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:19.280 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.280 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.280 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:19.280 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:19.280 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:19.280 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.281 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.281 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:19.281 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:19.281 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:19.281 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.281 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.281 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:19.281 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:19.281 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:19.281 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.281 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.281 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:19.281 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:19.281 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:19.281 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.281 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.281 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:13:19.281 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:19.281 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:13:19.281 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.281 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.281 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:19.281 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:19.281 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:19.281 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.281 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.281 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:19.281 10:40:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:19.281 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:19.281 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.281 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.281 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:19.281 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:19.281 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:19.281 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.281 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.281 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ = == \- ]] 00:13:19.281 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '=9fT;W}LH_ -^"#/bTcIeROg|W8<<>&W_V3'\''EY0CF' 00:13:19.281 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '=9fT;W}LH_ -^"#/bTcIeROg|W8<<>&W_V3'\''EY0CF' nqn.2016-06.io.spdk:cnode5452 00:13:19.539 [2024-11-19 10:40:26.740016] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5452: invalid model number '=9fT;W}LH_ -^"#/bTcIeROg|W8<<>&W_V3'EY0CF' 00:13:19.539 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:19.539 { 00:13:19.539 "nqn": "nqn.2016-06.io.spdk:cnode5452", 00:13:19.539 "model_number": "=9fT;W}LH_ -^\"#/bTcIeROg|W8<<>&W_V3'\''EY0CF", 00:13:19.539 "method": "nvmf_create_subsystem", 00:13:19.539 "req_id": 1 00:13:19.539 } 00:13:19.539 Got JSON-RPC error response 00:13:19.539 response: 00:13:19.539 { 00:13:19.539 "code": -32602, 00:13:19.539 "message": "Invalid MN =9fT;W}LH_ -^\"#/bTcIeROg|W8<<>&W_V3'\''EY0CF" 00:13:19.539 }' 00:13:19.539 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:19.539 { 00:13:19.539 "nqn": "nqn.2016-06.io.spdk:cnode5452", 00:13:19.539 "model_number": "=9fT;W}LH_ -^\"#/bTcIeROg|W8<<>&W_V3'EY0CF", 00:13:19.539 "method": "nvmf_create_subsystem", 00:13:19.539 "req_id": 1 00:13:19.539 } 00:13:19.539 Got JSON-RPC error response 00:13:19.539 response: 00:13:19.539 { 00:13:19.539 "code": -32602, 00:13:19.540 "message": "Invalid MN =9fT;W}LH_ -^\"#/bTcIeROg|W8<<>&W_V3'EY0CF" 00:13:19.540 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:19.540 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:19.540 [2024-11-19 10:40:26.944762] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:19.540 10:40:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:19.798 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:19.798 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:19.798 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # 
head -n 1 00:13:19.798 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:19.798 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:20.057 [2024-11-19 10:40:27.374216] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:20.057 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:20.057 { 00:13:20.057 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:20.057 "listen_address": { 00:13:20.057 "trtype": "tcp", 00:13:20.057 "traddr": "", 00:13:20.057 "trsvcid": "4421" 00:13:20.057 }, 00:13:20.057 "method": "nvmf_subsystem_remove_listener", 00:13:20.057 "req_id": 1 00:13:20.057 } 00:13:20.057 Got JSON-RPC error response 00:13:20.057 response: 00:13:20.057 { 00:13:20.057 "code": -32602, 00:13:20.057 "message": "Invalid parameters" 00:13:20.057 }' 00:13:20.057 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:20.057 { 00:13:20.057 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:20.057 "listen_address": { 00:13:20.057 "trtype": "tcp", 00:13:20.057 "traddr": "", 00:13:20.057 "trsvcid": "4421" 00:13:20.057 }, 00:13:20.057 "method": "nvmf_subsystem_remove_listener", 00:13:20.057 "req_id": 1 00:13:20.057 } 00:13:20.057 Got JSON-RPC error response 00:13:20.057 response: 00:13:20.057 { 00:13:20.057 "code": -32602, 00:13:20.057 "message": "Invalid parameters" 00:13:20.057 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:20.057 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -i 0 00:13:20.315 [2024-11-19 10:40:27.578880] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: invalid cntlid range [0-65519] 00:13:20.315 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:20.315 { 00:13:20.315 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:20.315 "min_cntlid": 0, 00:13:20.315 "method": "nvmf_create_subsystem", 00:13:20.315 "req_id": 1 00:13:20.315 } 00:13:20.315 Got JSON-RPC error response 00:13:20.315 response: 00:13:20.315 { 00:13:20.315 "code": -32602, 00:13:20.315 "message": "Invalid cntlid range [0-65519]" 00:13:20.315 }' 00:13:20.315 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:20.315 { 00:13:20.315 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:20.315 "min_cntlid": 0, 00:13:20.315 "method": "nvmf_create_subsystem", 00:13:20.315 "req_id": 1 00:13:20.315 } 00:13:20.315 Got JSON-RPC error response 00:13:20.315 response: 00:13:20.315 { 00:13:20.315 "code": -32602, 00:13:20.315 "message": "Invalid cntlid range [0-65519]" 00:13:20.315 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:20.315 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7494 -i 65520 00:13:20.574 [2024-11-19 10:40:27.787585] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7494: invalid cntlid range [65520-65519] 00:13:20.574 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:20.574 { 00:13:20.574 "nqn": 
"nqn.2016-06.io.spdk:cnode7494", 00:13:20.574 "min_cntlid": 65520, 00:13:20.574 "method": "nvmf_create_subsystem", 00:13:20.574 "req_id": 1 00:13:20.574 } 00:13:20.574 Got JSON-RPC error response 00:13:20.574 response: 00:13:20.574 { 00:13:20.574 "code": -32602, 00:13:20.574 "message": "Invalid cntlid range [65520-65519]" 00:13:20.574 }' 00:13:20.574 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:20.574 { 00:13:20.574 "nqn": "nqn.2016-06.io.spdk:cnode7494", 00:13:20.574 "min_cntlid": 65520, 00:13:20.574 "method": "nvmf_create_subsystem", 00:13:20.574 "req_id": 1 00:13:20.574 } 00:13:20.574 Got JSON-RPC error response 00:13:20.574 response: 00:13:20.574 { 00:13:20.574 "code": -32602, 00:13:20.574 "message": "Invalid cntlid range [65520-65519]" 00:13:20.574 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:20.574 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32136 -I 0 00:13:20.574 [2024-11-19 10:40:27.992271] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32136: invalid cntlid range [1-0] 00:13:20.832 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:20.832 { 00:13:20.832 "nqn": "nqn.2016-06.io.spdk:cnode32136", 00:13:20.832 "max_cntlid": 0, 00:13:20.832 "method": "nvmf_create_subsystem", 00:13:20.832 "req_id": 1 00:13:20.832 } 00:13:20.832 Got JSON-RPC error response 00:13:20.832 response: 00:13:20.832 { 00:13:20.832 "code": -32602, 00:13:20.832 "message": "Invalid cntlid range [1-0]" 00:13:20.832 }' 00:13:20.832 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:20.832 { 00:13:20.832 "nqn": "nqn.2016-06.io.spdk:cnode32136", 00:13:20.832 "max_cntlid": 0, 00:13:20.832 "method": "nvmf_create_subsystem", 00:13:20.832 "req_id": 1 00:13:20.832 } 00:13:20.832 Got JSON-RPC error response 00:13:20.832 response: 00:13:20.832 { 00:13:20.832 "code": -32602, 00:13:20.832 "message": "Invalid cntlid range [1-0]" 00:13:20.832 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:20.832 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26581 -I 65520 00:13:20.832 [2024-11-19 10:40:28.196979] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26581: invalid cntlid range [1-65520] 00:13:20.832 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:20.832 { 00:13:20.833 "nqn": "nqn.2016-06.io.spdk:cnode26581", 00:13:20.833 "max_cntlid": 65520, 00:13:20.833 "method": "nvmf_create_subsystem", 00:13:20.833 "req_id": 1 00:13:20.833 } 00:13:20.833 Got JSON-RPC error response 00:13:20.833 response: 00:13:20.833 { 00:13:20.833 "code": -32602, 00:13:20.833 "message": "Invalid cntlid range [1-65520]" 00:13:20.833 }' 00:13:20.833 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:20.833 { 00:13:20.833 "nqn": "nqn.2016-06.io.spdk:cnode26581", 00:13:20.833 "max_cntlid": 65520, 00:13:20.833 "method": "nvmf_create_subsystem", 00:13:20.833 "req_id": 1 00:13:20.833 } 00:13:20.833 Got JSON-RPC error response 00:13:20.833 response: 00:13:20.833 { 00:13:20.833 "code": -32602, 00:13:20.833 "message": "Invalid cntlid range [1-65520]" 
00:13:20.833 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:20.833 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13951 -i 6 -I 5 00:13:21.091 [2024-11-19 10:40:28.397701] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13951: invalid cntlid range [6-5] 00:13:21.091 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:21.091 { 00:13:21.091 "nqn": "nqn.2016-06.io.spdk:cnode13951", 00:13:21.091 "min_cntlid": 6, 00:13:21.091 "max_cntlid": 5, 00:13:21.091 "method": "nvmf_create_subsystem", 00:13:21.091 "req_id": 1 00:13:21.091 } 00:13:21.091 Got JSON-RPC error response 00:13:21.091 response: 00:13:21.091 { 00:13:21.091 "code": -32602, 00:13:21.091 "message": "Invalid cntlid range [6-5]" 00:13:21.091 }' 00:13:21.091 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:21.091 { 00:13:21.091 "nqn": "nqn.2016-06.io.spdk:cnode13951", 00:13:21.091 "min_cntlid": 6, 00:13:21.091 "max_cntlid": 5, 00:13:21.091 "method": "nvmf_create_subsystem", 00:13:21.091 "req_id": 1 00:13:21.091 } 00:13:21.091 Got JSON-RPC error response 00:13:21.091 response: 00:13:21.091 { 00:13:21.091 "code": -32602, 00:13:21.091 "message": "Invalid cntlid range [6-5]" 00:13:21.091 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:21.092 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:21.350 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:21.350 { 00:13:21.350 "name": "foobar", 00:13:21.350 "method": "nvmf_delete_target", 00:13:21.350 "req_id": 1 00:13:21.350 } 00:13:21.350 Got JSON-RPC error response 00:13:21.350 response: 00:13:21.350 { 00:13:21.350 "code": -32602, 00:13:21.350 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:21.350 }' 00:13:21.350 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:21.350 { 00:13:21.350 "name": "foobar", 00:13:21.350 "method": "nvmf_delete_target", 00:13:21.350 "req_id": 1 00:13:21.350 } 00:13:21.350 Got JSON-RPC error response 00:13:21.350 response: 00:13:21.350 { 00:13:21.350 "code": -32602, 00:13:21.350 "message": "The specified target doesn't exist, cannot delete it." 
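The run of nvmf_create_subsystem probes above maps out the controller-ID rules: cntlid values must stay within 1..65519 and min_cntlid cannot exceed max_cntlid, so 0, 65520, and the inverted pair [6-5] are each rejected with code -32602. Every probe uses the same capture-and-match pattern; a stripped-down sketch of one of them (the || true guard is an assumption, added so the sketch survives under set -e):

  out=$(scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13951 -i 6 -I 5 2>&1) || true
  [[ $out == *"Invalid cntlid range [6-5]"* ]]    # rpc.py echoes the request and the JSON-RPC error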
00:13:21.350 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:21.350 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:21.351 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:21.351 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:21.351 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:13:21.351 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:21.351 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:13:21.351 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:21.351 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:21.351 rmmod nvme_tcp 00:13:21.351 rmmod nvme_fabrics 00:13:21.351 rmmod nvme_keyring 00:13:21.351 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:21.351 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:13:21.351 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:13:21.351 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 1631792 ']' 00:13:21.351 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 1631792 00:13:21.351 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 1631792 ']' 00:13:21.351 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 1631792 00:13:21.351 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:13:21.351 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:21.351 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1631792 00:13:21.351 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:21.351 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:21.351 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1631792' 00:13:21.351 killing process with pid 1631792 00:13:21.351 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 1631792 00:13:21.351 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 1631792 00:13:21.609 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:21.610 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:21.610 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:21.610 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:13:21.610 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:13:21.610 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:21.610 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 
-- # iptables-restore 00:13:21.610 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:21.610 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:21.610 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.610 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:21.610 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.513 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:23.513 00:13:23.513 real 0m12.036s 00:13:23.513 user 0m18.773s 00:13:23.513 sys 0m5.381s 00:13:23.513 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:23.513 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:23.513 ************************************ 00:13:23.513 END TEST nvmf_invalid 00:13:23.513 ************************************ 00:13:23.513 10:40:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:23.513 10:40:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:23.513 10:40:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:23.513 10:40:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:23.772 ************************************ 00:13:23.772 START TEST nvmf_connect_stress 00:13:23.772 ************************************ 00:13:23.772 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:23.772 * Looking for test storage... 
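The nvmf_invalid output above is SPDK's JSON-RPC parameter validation at work: nvmf_create_subsystem refuses a controller ID range whose minimum exceeds its maximum and answers with error -32602, and nvmf_delete_target likewise rejects an unknown target name. A minimal sketch of reproducing the inverted-range case by hand, assuming a running nvmf target on the default /var/tmp/spdk.sock RPC socket (the NQN is arbitrary):

    # -i is min_cntlid and -I is max_cntlid, as the JSON request above shows;
    # 6 > 5, so the target is expected to answer with JSON-RPC error -32602
    # "Invalid cntlid range [6-5]".
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13951 -i 6 -I 5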
00:13:23.772 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:23.772 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:23.772 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:23.772 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:23.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.773 --rc genhtml_branch_coverage=1 00:13:23.773 --rc genhtml_function_coverage=1 00:13:23.773 --rc genhtml_legend=1 00:13:23.773 --rc geninfo_all_blocks=1 00:13:23.773 --rc geninfo_unexecuted_blocks=1 00:13:23.773 00:13:23.773 ' 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:23.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.773 --rc genhtml_branch_coverage=1 00:13:23.773 --rc genhtml_function_coverage=1 00:13:23.773 --rc genhtml_legend=1 00:13:23.773 --rc geninfo_all_blocks=1 00:13:23.773 --rc geninfo_unexecuted_blocks=1 00:13:23.773 00:13:23.773 ' 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:23.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.773 --rc genhtml_branch_coverage=1 00:13:23.773 --rc genhtml_function_coverage=1 00:13:23.773 --rc genhtml_legend=1 00:13:23.773 --rc geninfo_all_blocks=1 00:13:23.773 --rc geninfo_unexecuted_blocks=1 00:13:23.773 00:13:23.773 ' 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:23.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.773 --rc genhtml_branch_coverage=1 00:13:23.773 --rc genhtml_function_coverage=1 00:13:23.773 --rc genhtml_legend=1 00:13:23.773 --rc geninfo_all_blocks=1 00:13:23.773 --rc geninfo_unexecuted_blocks=1 00:13:23.773 00:13:23.773 ' 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:23.773 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:23.773 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:23.774 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:23.774 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:23.774 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:23.774 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:23.774 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:23.774 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:23.774 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:23.774 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.774 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:23.774 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.774 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:23.774 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:23.774 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:23.774 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:30.345 10:40:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:30.345 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:30.345 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:30.345 Found net devices under 0000:86:00.0: cvl_0_0 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:30.345 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:30.346 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:30.346 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.346 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:30.346 Found net devices under 0000:86:00.1: cvl_0_1 00:13:30.346 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:30.346 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:30.346 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:13:30.346 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:30.346 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:30.346 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:30.346 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:30.346 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:30.346 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:30.346 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:30.346 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:30.346 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:30.346 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:30.346 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:30.346 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:30.346 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:30.346 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:30.346 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:30.346 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:30.346 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:30.346 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:30.346 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:30.346 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:30.346 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:30.346 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:30.346 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:30.346 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:30.346 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:30.346 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:30.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:30.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.408 ms 00:13:30.346 00:13:30.346 --- 10.0.0.2 ping statistics --- 00:13:30.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.346 rtt min/avg/max/mdev = 0.408/0.408/0.408/0.000 ms 00:13:30.346 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:30.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:30.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:13:30.346 00:13:30.346 --- 10.0.0.1 ping statistics --- 00:13:30.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.346 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:13:30.346 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:30.346 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:13:30.346 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:30.346 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:30.346 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:30.346 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:30.346 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:30.346 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:30.346 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:30.346 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:30.346 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:30.346 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:30.346 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.346 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=1636077 00:13:30.346 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 1636077 00:13:30.346 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:30.346 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 1636077 ']' 00:13:30.346 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.346 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:30.346 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:30.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.346 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:30.346 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.346 [2024-11-19 10:40:37.223750] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:13:30.346 [2024-11-19 10:40:37.223797] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:30.346 [2024-11-19 10:40:37.301315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:30.346 [2024-11-19 10:40:37.343326] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:30.346 [2024-11-19 10:40:37.343365] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:30.346 [2024-11-19 10:40:37.343374] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:30.346 [2024-11-19 10:40:37.343380] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:30.346 [2024-11-19 10:40:37.343385] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:30.346 [2024-11-19 10:40:37.344770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:30.346 [2024-11-19 10:40:37.344875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:30.346 [2024-11-19 10:40:37.344876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:30.346 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:30.346 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:13:30.346 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:30.346 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:30.346 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.346 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:30.346 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:30.346 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.346 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.347 [2024-11-19 10:40:37.480389] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
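Before this point the trace moved cvl_0_0 into the cvl_0_0_ns_spdk network namespace, addressed it as 10.0.0.2/24 (with cvl_0_1 as the 10.0.0.1/24 initiator side), opened TCP port 4420 in iptables, and started nvmf_tgt inside the namespace. The rpc_cmd traces here and in the lines that follow then bring up the stress target; condensed into direct rpc.py calls, that bring-up looks roughly like the sketch below (rpc_cmd is a thin wrapper around scripts/rpc.py, and the default RPC socket is assumed):

    # Transport, subsystem, listener, and a null backing bdev, with the same
    # arguments captured in the xtrace above and below.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10      # allow any host, serial number, max 10 namespaces
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420          # listen on the namespaced target IP
    ./scripts/rpc.py bdev_null_create NULL1 1000 512    # 1000 MB null bdev, 512 B blocks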
00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.347 [2024-11-19 10:40:37.500605] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.347 NULL1 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1636202 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.347 10:40:37 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.347 10:40:37 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1636202 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.347 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.606 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.606 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1636202 00:13:30.606 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.606 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.606 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.863 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.863 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1636202 00:13:30.863 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.863 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.863 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.121 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.121 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1636202 00:13:31.121 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.121 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.121 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.687 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.687 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1636202 00:13:31.687 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.687 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.688 10:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.945 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.945 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1636202 00:13:31.945 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.945 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.945 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.203 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.203 10:40:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1636202 00:13:32.203 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.203 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.203 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.462 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.462 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1636202 00:13:32.462 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.462 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.462 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.029 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.029 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1636202 00:13:33.029 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.029 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.030 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.289 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.289 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1636202 00:13:33.289 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.289 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.289 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.548 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.548 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1636202 00:13:33.548 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.548 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.548 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.807 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.807 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1636202 00:13:33.807 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.807 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.807 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.066 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.067 10:40:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1636202 00:13:34.067 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:34.067 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.067 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.635 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
[the same kill -0 / rpc_cmd / xtrace_disable / set +x cycle repeats, identical apart from timestamps advancing from 00:13:34.635 through 00:13:39.958]
00:13:39.958 10:40:47
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1636202 00:13:39.958 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:39.958 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.958 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.227 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:40.494 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.494 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1636202 00:13:40.494 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1636202) - No such process 00:13:40.494 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1636202 00:13:40.494 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:40.494 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:40.494 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:40.494 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:40.494 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:40.494 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:40.494 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:40.494 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:40.494 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:40.494 rmmod nvme_tcp 00:13:40.494 rmmod nvme_fabrics 00:13:40.494 rmmod nvme_keyring 00:13:40.494 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:40.494 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:40.494 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:40.494 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 1636077 ']' 00:13:40.494 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 1636077 00:13:40.494 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 1636077 ']' 00:13:40.494 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 1636077 00:13:40.494 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:13:40.494 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:40.494 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1636077 00:13:40.494 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 
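The cycle that fills the trace above is connect_stress.sh's liveness poll: kill -0 (line 34) succeeds for as long as the background stress process, pid 1636202, is alive, and line 35 fires another rpc_cmd at the target on every pass; the teardown trace resumes below with killprocess inspecting pid 1636077. A minimal bash sketch of that poll-until-exit pattern, assuming an illustrative RPC call and helper paths rather than the script's verbatim source:

    STRESS_PID=1636202                           # background stress workload launched earlier in the test
    while kill -0 "$STRESS_PID" 2>/dev/null; do  # line 34: signal 0 only probes, never kills
        scripts/rpc.py bdev_get_bdevs > rpc.txt  # line 35: keep the target busy (illustrative stand-in for rpc_cmd)
    done
    wait "$STRESS_PID" 2>/dev/null               # line 38: reap it once kill -0 reports "No such process"
    rm -f rpc.txt                                # line 39: discard the RPC scratch file
    trap - SIGINT SIGTERM EXIT                   # line 41: drop the error trap before nvmftestfini (line 43)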
00:13:40.494 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:40.494 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1636077' 00:13:40.494 killing process with pid 1636077 00:13:40.494 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 1636077 00:13:40.494 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 1636077 00:13:40.753 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:40.753 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:40.753 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:40.753 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:13:40.753 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:13:40.753 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:40.753 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:13:40.753 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:40.753 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:40.753 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.753 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:40.753 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.657 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:42.657 00:13:42.657 real 0m19.076s 00:13:42.657 user 0m39.553s 00:13:42.657 sys 0m8.440s 00:13:42.657 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:42.657 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.657 ************************************ 00:13:42.657 END TEST nvmf_connect_stress 00:13:42.657 ************************************ 00:13:42.657 10:40:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:42.657 10:40:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:42.657 10:40:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:42.657 10:40:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:42.917 ************************************ 00:13:42.917 START TEST nvmf_fused_ordering 00:13:42.917 ************************************ 00:13:42.917 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:42.917 * Looking for test storage... 
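One idiom from the teardown just above is worth isolating before the fused_ordering storage banner resumes: every firewall rule nvmf/common.sh opens carries an SPDK_NVMF comment tag, so iptr can later delete exactly those rules by filtering a full ruleset dump through iptables-restore. A sketch of the pair, with the helper bodies assumed from the traced commands rather than quoted from the script:

    ipts() {                                     # add a rule, tagged so cleanup can find it
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    iptr() {                                     # strip every tagged rule, leave the rest untouched
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }
    ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # opened during nvmftestinit, as traced later
    iptr                                                       # run from nvmftestfini, as traced above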
00:13:42.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:42.917 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:42.917 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:42.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.918 --rc genhtml_branch_coverage=1 00:13:42.918 --rc genhtml_function_coverage=1 00:13:42.918 --rc genhtml_legend=1 00:13:42.918 --rc geninfo_all_blocks=1 00:13:42.918 --rc geninfo_unexecuted_blocks=1 00:13:42.918 00:13:42.918 ' 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:42.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.918 --rc genhtml_branch_coverage=1 00:13:42.918 --rc genhtml_function_coverage=1 00:13:42.918 --rc genhtml_legend=1 00:13:42.918 --rc geninfo_all_blocks=1 00:13:42.918 --rc geninfo_unexecuted_blocks=1 00:13:42.918 00:13:42.918 ' 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:42.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.918 --rc genhtml_branch_coverage=1 00:13:42.918 --rc genhtml_function_coverage=1 00:13:42.918 --rc genhtml_legend=1 00:13:42.918 --rc geninfo_all_blocks=1 00:13:42.918 --rc geninfo_unexecuted_blocks=1 00:13:42.918 00:13:42.918 ' 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:42.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.918 --rc genhtml_branch_coverage=1 00:13:42.918 --rc genhtml_function_coverage=1 00:13:42.918 --rc genhtml_legend=1 00:13:42.918 --rc geninfo_all_blocks=1 00:13:42.918 --rc geninfo_unexecuted_blocks=1 00:13:42.918 00:13:42.918 ' 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.918 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.919 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.919 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:42.919 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.919 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:42.919 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:42.919 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:42.919 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:42.919 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:42.919 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:42.919 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:42.919 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:42.919 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:42.919 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:42.919 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:42.919 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:42.919 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:42.919 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:42.919 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:42.919 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:42.919 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:42.919 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.919 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:42.919 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.919 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:42.919 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:42.919 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:42.919 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:13:49.624 10:40:56 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:49.624 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:49.624 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:49.624 Found net devices under 0000:86:00.0: cvl_0_0 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:49.624 Found net devices under 0000:86:00.1: cvl_0_1 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:49.624 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:49.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:49.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms 00:13:49.625 00:13:49.625 --- 10.0.0.2 ping statistics --- 00:13:49.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.625 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:49.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:49.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:13:49.625 00:13:49.625 --- 10.0.0.1 ping statistics --- 00:13:49.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.625 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=1641370 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 1641370 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 1641370 ']' 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:49.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:49.625 [2024-11-19 10:40:56.356621] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:13:49.625 [2024-11-19 10:40:56.356667] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:49.625 [2024-11-19 10:40:56.435998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.625 [2024-11-19 10:40:56.474900] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:49.625 [2024-11-19 10:40:56.474936] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:49.625 [2024-11-19 10:40:56.474942] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:49.625 [2024-11-19 10:40:56.474952] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:49.625 [2024-11-19 10:40:56.474957] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:49.625 [2024-11-19 10:40:56.475508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:49.625 [2024-11-19 10:40:56.622347] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:49.625 [2024-11-19 10:40:56.642534] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:49.625 NULL1 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.625 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:49.625 [2024-11-19 10:40:56.703344] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
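At this point fused_ordering.sh (lines 15-22 in the trace) has assembled the entire target: TCP transport, subsystem cnode1, a listener on 10.0.0.2:4420, a 1000 MB null bdev, and its namespace, and has launched the fused_ordering app, whose startup notices continue below. The same setup condensed into the underlying rpc.py calls (rpc_cmd wraps scripts/rpc.py; socket details omitted here, flags copied from the trace):

    rpc.py nvmf_create_transport -t tcp -o -u 8192                                            # line 15
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10    # line 16
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420  # line 17
    rpc.py bdev_null_create NULL1 1000 512                                                    # line 18: 1000 MB, 512 B blocks
    rpc.py bdev_wait_for_examine                                                              # line 19
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1                             # line 20
    test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'  # line 22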
00:13:49.625 [2024-11-19 10:40:56.703390] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1641390 ] 00:13:49.625 Attached to nqn.2016-06.io.spdk:cnode1 00:13:49.625 Namespace ID: 1 size: 1GB 00:13:49.625 fused_ordering(0) 00:13:49.625 fused_ordering(1) 00:13:49.625
[fused_ordering counters 2 through 526 continue in strict sequence, timestamps advancing from 00:13:49.625 to 00:13:50.143]
00:13:50.143 fused_ordering(527) 00:13:50.143 fused_ordering(528)
00:13:50.143 fused_ordering(529) 00:13:50.143 fused_ordering(530) 00:13:50.143 fused_ordering(531) 00:13:50.143 fused_ordering(532) 00:13:50.143 fused_ordering(533) 00:13:50.143 fused_ordering(534) 00:13:50.143 fused_ordering(535) 00:13:50.143 fused_ordering(536) 00:13:50.143 fused_ordering(537) 00:13:50.143 fused_ordering(538) 00:13:50.143 fused_ordering(539) 00:13:50.143 fused_ordering(540) 00:13:50.143 fused_ordering(541) 00:13:50.143 fused_ordering(542) 00:13:50.143 fused_ordering(543) 00:13:50.143 fused_ordering(544) 00:13:50.143 fused_ordering(545) 00:13:50.143 fused_ordering(546) 00:13:50.143 fused_ordering(547) 00:13:50.143 fused_ordering(548) 00:13:50.143 fused_ordering(549) 00:13:50.143 fused_ordering(550) 00:13:50.143 fused_ordering(551) 00:13:50.143 fused_ordering(552) 00:13:50.143 fused_ordering(553) 00:13:50.143 fused_ordering(554) 00:13:50.143 fused_ordering(555) 00:13:50.143 fused_ordering(556) 00:13:50.143 fused_ordering(557) 00:13:50.143 fused_ordering(558) 00:13:50.143 fused_ordering(559) 00:13:50.143 fused_ordering(560) 00:13:50.143 fused_ordering(561) 00:13:50.143 fused_ordering(562) 00:13:50.143 fused_ordering(563) 00:13:50.143 fused_ordering(564) 00:13:50.143 fused_ordering(565) 00:13:50.143 fused_ordering(566) 00:13:50.143 fused_ordering(567) 00:13:50.143 fused_ordering(568) 00:13:50.143 fused_ordering(569) 00:13:50.143 fused_ordering(570) 00:13:50.143 fused_ordering(571) 00:13:50.143 fused_ordering(572) 00:13:50.143 fused_ordering(573) 00:13:50.143 fused_ordering(574) 00:13:50.143 fused_ordering(575) 00:13:50.143 fused_ordering(576) 00:13:50.143 fused_ordering(577) 00:13:50.143 fused_ordering(578) 00:13:50.143 fused_ordering(579) 00:13:50.143 fused_ordering(580) 00:13:50.143 fused_ordering(581) 00:13:50.143 fused_ordering(582) 00:13:50.143 fused_ordering(583) 00:13:50.143 fused_ordering(584) 00:13:50.143 fused_ordering(585) 00:13:50.143 fused_ordering(586) 00:13:50.143 fused_ordering(587) 00:13:50.143 fused_ordering(588) 00:13:50.143 fused_ordering(589) 00:13:50.143 fused_ordering(590) 00:13:50.143 fused_ordering(591) 00:13:50.143 fused_ordering(592) 00:13:50.143 fused_ordering(593) 00:13:50.143 fused_ordering(594) 00:13:50.143 fused_ordering(595) 00:13:50.143 fused_ordering(596) 00:13:50.143 fused_ordering(597) 00:13:50.143 fused_ordering(598) 00:13:50.143 fused_ordering(599) 00:13:50.143 fused_ordering(600) 00:13:50.143 fused_ordering(601) 00:13:50.143 fused_ordering(602) 00:13:50.143 fused_ordering(603) 00:13:50.143 fused_ordering(604) 00:13:50.143 fused_ordering(605) 00:13:50.143 fused_ordering(606) 00:13:50.143 fused_ordering(607) 00:13:50.143 fused_ordering(608) 00:13:50.143 fused_ordering(609) 00:13:50.143 fused_ordering(610) 00:13:50.143 fused_ordering(611) 00:13:50.143 fused_ordering(612) 00:13:50.143 fused_ordering(613) 00:13:50.143 fused_ordering(614) 00:13:50.143 fused_ordering(615) 00:13:50.708 fused_ordering(616) 00:13:50.708 fused_ordering(617) 00:13:50.708 fused_ordering(618) 00:13:50.708 fused_ordering(619) 00:13:50.708 fused_ordering(620) 00:13:50.708 fused_ordering(621) 00:13:50.708 fused_ordering(622) 00:13:50.708 fused_ordering(623) 00:13:50.708 fused_ordering(624) 00:13:50.708 fused_ordering(625) 00:13:50.708 fused_ordering(626) 00:13:50.708 fused_ordering(627) 00:13:50.708 fused_ordering(628) 00:13:50.708 fused_ordering(629) 00:13:50.708 fused_ordering(630) 00:13:50.708 fused_ordering(631) 00:13:50.708 fused_ordering(632) 00:13:50.708 fused_ordering(633) 00:13:50.708 fused_ordering(634) 00:13:50.708 fused_ordering(635) 00:13:50.708 
fused_ordering(636) 00:13:50.708 fused_ordering(637) 00:13:50.708 fused_ordering(638) 00:13:50.708 fused_ordering(639) 00:13:50.708 fused_ordering(640) 00:13:50.708 fused_ordering(641) 00:13:50.708 fused_ordering(642) 00:13:50.708 fused_ordering(643) 00:13:50.708 fused_ordering(644) 00:13:50.708 fused_ordering(645) 00:13:50.708 fused_ordering(646) 00:13:50.708 fused_ordering(647) 00:13:50.708 fused_ordering(648) 00:13:50.708 fused_ordering(649) 00:13:50.708 fused_ordering(650) 00:13:50.708 fused_ordering(651) 00:13:50.708 fused_ordering(652) 00:13:50.708 fused_ordering(653) 00:13:50.708 fused_ordering(654) 00:13:50.708 fused_ordering(655) 00:13:50.708 fused_ordering(656) 00:13:50.708 fused_ordering(657) 00:13:50.708 fused_ordering(658) 00:13:50.708 fused_ordering(659) 00:13:50.708 fused_ordering(660) 00:13:50.708 fused_ordering(661) 00:13:50.708 fused_ordering(662) 00:13:50.708 fused_ordering(663) 00:13:50.708 fused_ordering(664) 00:13:50.708 fused_ordering(665) 00:13:50.708 fused_ordering(666) 00:13:50.708 fused_ordering(667) 00:13:50.708 fused_ordering(668) 00:13:50.708 fused_ordering(669) 00:13:50.708 fused_ordering(670) 00:13:50.708 fused_ordering(671) 00:13:50.708 fused_ordering(672) 00:13:50.708 fused_ordering(673) 00:13:50.708 fused_ordering(674) 00:13:50.708 fused_ordering(675) 00:13:50.708 fused_ordering(676) 00:13:50.708 fused_ordering(677) 00:13:50.708 fused_ordering(678) 00:13:50.708 fused_ordering(679) 00:13:50.708 fused_ordering(680) 00:13:50.708 fused_ordering(681) 00:13:50.708 fused_ordering(682) 00:13:50.708 fused_ordering(683) 00:13:50.708 fused_ordering(684) 00:13:50.708 fused_ordering(685) 00:13:50.708 fused_ordering(686) 00:13:50.708 fused_ordering(687) 00:13:50.708 fused_ordering(688) 00:13:50.708 fused_ordering(689) 00:13:50.708 fused_ordering(690) 00:13:50.708 fused_ordering(691) 00:13:50.708 fused_ordering(692) 00:13:50.708 fused_ordering(693) 00:13:50.708 fused_ordering(694) 00:13:50.708 fused_ordering(695) 00:13:50.708 fused_ordering(696) 00:13:50.708 fused_ordering(697) 00:13:50.708 fused_ordering(698) 00:13:50.708 fused_ordering(699) 00:13:50.708 fused_ordering(700) 00:13:50.708 fused_ordering(701) 00:13:50.708 fused_ordering(702) 00:13:50.708 fused_ordering(703) 00:13:50.708 fused_ordering(704) 00:13:50.708 fused_ordering(705) 00:13:50.708 fused_ordering(706) 00:13:50.708 fused_ordering(707) 00:13:50.708 fused_ordering(708) 00:13:50.708 fused_ordering(709) 00:13:50.708 fused_ordering(710) 00:13:50.708 fused_ordering(711) 00:13:50.708 fused_ordering(712) 00:13:50.708 fused_ordering(713) 00:13:50.708 fused_ordering(714) 00:13:50.708 fused_ordering(715) 00:13:50.708 fused_ordering(716) 00:13:50.708 fused_ordering(717) 00:13:50.708 fused_ordering(718) 00:13:50.708 fused_ordering(719) 00:13:50.708 fused_ordering(720) 00:13:50.708 fused_ordering(721) 00:13:50.708 fused_ordering(722) 00:13:50.708 fused_ordering(723) 00:13:50.708 fused_ordering(724) 00:13:50.708 fused_ordering(725) 00:13:50.708 fused_ordering(726) 00:13:50.708 fused_ordering(727) 00:13:50.708 fused_ordering(728) 00:13:50.708 fused_ordering(729) 00:13:50.708 fused_ordering(730) 00:13:50.708 fused_ordering(731) 00:13:50.708 fused_ordering(732) 00:13:50.708 fused_ordering(733) 00:13:50.708 fused_ordering(734) 00:13:50.708 fused_ordering(735) 00:13:50.708 fused_ordering(736) 00:13:50.708 fused_ordering(737) 00:13:50.708 fused_ordering(738) 00:13:50.708 fused_ordering(739) 00:13:50.708 fused_ordering(740) 00:13:50.708 fused_ordering(741) 00:13:50.708 fused_ordering(742) 00:13:50.708 fused_ordering(743) 
00:13:50.708 fused_ordering(744) 00:13:50.708 fused_ordering(745) 00:13:50.708 fused_ordering(746) 00:13:50.708 fused_ordering(747) 00:13:50.708 fused_ordering(748) 00:13:50.708 fused_ordering(749) 00:13:50.708 fused_ordering(750) 00:13:50.708 fused_ordering(751) 00:13:50.708 fused_ordering(752) 00:13:50.708 fused_ordering(753) 00:13:50.708 fused_ordering(754) 00:13:50.708 fused_ordering(755) 00:13:50.708 fused_ordering(756) 00:13:50.708 fused_ordering(757) 00:13:50.708 fused_ordering(758) 00:13:50.708 fused_ordering(759) 00:13:50.708 fused_ordering(760) 00:13:50.708 fused_ordering(761) 00:13:50.708 fused_ordering(762) 00:13:50.708 fused_ordering(763) 00:13:50.708 fused_ordering(764) 00:13:50.708 fused_ordering(765) 00:13:50.708 fused_ordering(766) 00:13:50.708 fused_ordering(767) 00:13:50.708 fused_ordering(768) 00:13:50.708 fused_ordering(769) 00:13:50.708 fused_ordering(770) 00:13:50.708 fused_ordering(771) 00:13:50.708 fused_ordering(772) 00:13:50.708 fused_ordering(773) 00:13:50.708 fused_ordering(774) 00:13:50.708 fused_ordering(775) 00:13:50.708 fused_ordering(776) 00:13:50.708 fused_ordering(777) 00:13:50.708 fused_ordering(778) 00:13:50.708 fused_ordering(779) 00:13:50.708 fused_ordering(780) 00:13:50.708 fused_ordering(781) 00:13:50.708 fused_ordering(782) 00:13:50.708 fused_ordering(783) 00:13:50.708 fused_ordering(784) 00:13:50.708 fused_ordering(785) 00:13:50.708 fused_ordering(786) 00:13:50.708 fused_ordering(787) 00:13:50.708 fused_ordering(788) 00:13:50.708 fused_ordering(789) 00:13:50.708 fused_ordering(790) 00:13:50.708 fused_ordering(791) 00:13:50.708 fused_ordering(792) 00:13:50.708 fused_ordering(793) 00:13:50.708 fused_ordering(794) 00:13:50.708 fused_ordering(795) 00:13:50.708 fused_ordering(796) 00:13:50.708 fused_ordering(797) 00:13:50.708 fused_ordering(798) 00:13:50.708 fused_ordering(799) 00:13:50.708 fused_ordering(800) 00:13:50.708 fused_ordering(801) 00:13:50.708 fused_ordering(802) 00:13:50.708 fused_ordering(803) 00:13:50.708 fused_ordering(804) 00:13:50.708 fused_ordering(805) 00:13:50.708 fused_ordering(806) 00:13:50.708 fused_ordering(807) 00:13:50.708 fused_ordering(808) 00:13:50.708 fused_ordering(809) 00:13:50.708 fused_ordering(810) 00:13:50.708 fused_ordering(811) 00:13:50.708 fused_ordering(812) 00:13:50.708 fused_ordering(813) 00:13:50.708 fused_ordering(814) 00:13:50.708 fused_ordering(815) 00:13:50.708 fused_ordering(816) 00:13:50.708 fused_ordering(817) 00:13:50.708 fused_ordering(818) 00:13:50.708 fused_ordering(819) 00:13:50.708 fused_ordering(820) 00:13:51.275 fused_ordering(821) 00:13:51.275 fused_ordering(822) 00:13:51.275 fused_ordering(823) 00:13:51.275 fused_ordering(824) 00:13:51.275 fused_ordering(825) 00:13:51.275 fused_ordering(826) 00:13:51.275 fused_ordering(827) 00:13:51.275 fused_ordering(828) 00:13:51.275 fused_ordering(829) 00:13:51.275 fused_ordering(830) 00:13:51.275 fused_ordering(831) 00:13:51.275 fused_ordering(832) 00:13:51.275 fused_ordering(833) 00:13:51.275 fused_ordering(834) 00:13:51.275 fused_ordering(835) 00:13:51.275 fused_ordering(836) 00:13:51.275 fused_ordering(837) 00:13:51.275 fused_ordering(838) 00:13:51.275 fused_ordering(839) 00:13:51.275 fused_ordering(840) 00:13:51.275 fused_ordering(841) 00:13:51.275 fused_ordering(842) 00:13:51.275 fused_ordering(843) 00:13:51.275 fused_ordering(844) 00:13:51.275 fused_ordering(845) 00:13:51.275 fused_ordering(846) 00:13:51.275 fused_ordering(847) 00:13:51.275 fused_ordering(848) 00:13:51.275 fused_ordering(849) 00:13:51.275 fused_ordering(850) 00:13:51.275 
fused_ordering(851) 00:13:51.275 fused_ordering(852) 00:13:51.275 fused_ordering(853) 00:13:51.275 fused_ordering(854) 00:13:51.275 fused_ordering(855) 00:13:51.275 fused_ordering(856) 00:13:51.275 fused_ordering(857) 00:13:51.275 fused_ordering(858) 00:13:51.275 fused_ordering(859) 00:13:51.275 fused_ordering(860) 00:13:51.275 fused_ordering(861) 00:13:51.275 fused_ordering(862) 00:13:51.275 fused_ordering(863) 00:13:51.275 fused_ordering(864) 00:13:51.275 fused_ordering(865) 00:13:51.275 fused_ordering(866) 00:13:51.275 fused_ordering(867) 00:13:51.275 fused_ordering(868) 00:13:51.275 fused_ordering(869) 00:13:51.275 fused_ordering(870) 00:13:51.275 fused_ordering(871) 00:13:51.275 fused_ordering(872) 00:13:51.275 fused_ordering(873) 00:13:51.275 fused_ordering(874) 00:13:51.275 fused_ordering(875) 00:13:51.275 fused_ordering(876) 00:13:51.275 fused_ordering(877) 00:13:51.275 fused_ordering(878) 00:13:51.275 fused_ordering(879) 00:13:51.275 fused_ordering(880) 00:13:51.275 fused_ordering(881) 00:13:51.275 fused_ordering(882) 00:13:51.275 fused_ordering(883) 00:13:51.275 fused_ordering(884) 00:13:51.275 fused_ordering(885) 00:13:51.275 fused_ordering(886) 00:13:51.275 fused_ordering(887) 00:13:51.275 fused_ordering(888) 00:13:51.275 fused_ordering(889) 00:13:51.275 fused_ordering(890) 00:13:51.275 fused_ordering(891) 00:13:51.275 fused_ordering(892) 00:13:51.275 fused_ordering(893) 00:13:51.275 fused_ordering(894) 00:13:51.275 fused_ordering(895) 00:13:51.275 fused_ordering(896) 00:13:51.275 fused_ordering(897) 00:13:51.275 fused_ordering(898) 00:13:51.275 fused_ordering(899) 00:13:51.275 fused_ordering(900) 00:13:51.275 fused_ordering(901) 00:13:51.275 fused_ordering(902) 00:13:51.275 fused_ordering(903) 00:13:51.275 fused_ordering(904) 00:13:51.275 fused_ordering(905) 00:13:51.275 fused_ordering(906) 00:13:51.275 fused_ordering(907) 00:13:51.275 fused_ordering(908) 00:13:51.275 fused_ordering(909) 00:13:51.275 fused_ordering(910) 00:13:51.275 fused_ordering(911) 00:13:51.275 fused_ordering(912) 00:13:51.275 fused_ordering(913) 00:13:51.275 fused_ordering(914) 00:13:51.275 fused_ordering(915) 00:13:51.275 fused_ordering(916) 00:13:51.275 fused_ordering(917) 00:13:51.275 fused_ordering(918) 00:13:51.275 fused_ordering(919) 00:13:51.275 fused_ordering(920) 00:13:51.275 fused_ordering(921) 00:13:51.275 fused_ordering(922) 00:13:51.275 fused_ordering(923) 00:13:51.275 fused_ordering(924) 00:13:51.275 fused_ordering(925) 00:13:51.275 fused_ordering(926) 00:13:51.275 fused_ordering(927) 00:13:51.275 fused_ordering(928) 00:13:51.275 fused_ordering(929) 00:13:51.275 fused_ordering(930) 00:13:51.275 fused_ordering(931) 00:13:51.275 fused_ordering(932) 00:13:51.275 fused_ordering(933) 00:13:51.275 fused_ordering(934) 00:13:51.275 fused_ordering(935) 00:13:51.275 fused_ordering(936) 00:13:51.275 fused_ordering(937) 00:13:51.275 fused_ordering(938) 00:13:51.275 fused_ordering(939) 00:13:51.275 fused_ordering(940) 00:13:51.275 fused_ordering(941) 00:13:51.275 fused_ordering(942) 00:13:51.276 fused_ordering(943) 00:13:51.276 fused_ordering(944) 00:13:51.276 fused_ordering(945) 00:13:51.276 fused_ordering(946) 00:13:51.276 fused_ordering(947) 00:13:51.276 fused_ordering(948) 00:13:51.276 fused_ordering(949) 00:13:51.276 fused_ordering(950) 00:13:51.276 fused_ordering(951) 00:13:51.276 fused_ordering(952) 00:13:51.276 fused_ordering(953) 00:13:51.276 fused_ordering(954) 00:13:51.276 fused_ordering(955) 00:13:51.276 fused_ordering(956) 00:13:51.276 fused_ordering(957) 00:13:51.276 fused_ordering(958) 
00:13:51.276 fused_ordering(959) 00:13:51.276 fused_ordering(960) 00:13:51.276 fused_ordering(961) 00:13:51.276 fused_ordering(962) 00:13:51.276 fused_ordering(963) 00:13:51.276 fused_ordering(964) 00:13:51.276 fused_ordering(965) 00:13:51.276 fused_ordering(966) 00:13:51.276 fused_ordering(967) 00:13:51.276 fused_ordering(968) 00:13:51.276 fused_ordering(969) 00:13:51.276 fused_ordering(970) 00:13:51.276 fused_ordering(971) 00:13:51.276 fused_ordering(972) 00:13:51.276 fused_ordering(973) 00:13:51.276 fused_ordering(974) 00:13:51.276 fused_ordering(975) 00:13:51.276 fused_ordering(976) 00:13:51.276 fused_ordering(977) 00:13:51.276 fused_ordering(978) 00:13:51.276 fused_ordering(979) 00:13:51.276 fused_ordering(980) 00:13:51.276 fused_ordering(981) 00:13:51.276 fused_ordering(982) 00:13:51.276 fused_ordering(983) 00:13:51.276 fused_ordering(984) 00:13:51.276 fused_ordering(985) 00:13:51.276 fused_ordering(986) 00:13:51.276 fused_ordering(987) 00:13:51.276 fused_ordering(988) 00:13:51.276 fused_ordering(989) 00:13:51.276 fused_ordering(990) 00:13:51.276 fused_ordering(991) 00:13:51.276 fused_ordering(992) 00:13:51.276 fused_ordering(993) 00:13:51.276 fused_ordering(994) 00:13:51.276 fused_ordering(995) 00:13:51.276 fused_ordering(996) 00:13:51.276 fused_ordering(997) 00:13:51.276 fused_ordering(998) 00:13:51.276 fused_ordering(999) 00:13:51.276 fused_ordering(1000) 00:13:51.276 fused_ordering(1001) 00:13:51.276 fused_ordering(1002) 00:13:51.276 fused_ordering(1003) 00:13:51.276 fused_ordering(1004) 00:13:51.276 fused_ordering(1005) 00:13:51.276 fused_ordering(1006) 00:13:51.276 fused_ordering(1007) 00:13:51.276 fused_ordering(1008) 00:13:51.276 fused_ordering(1009) 00:13:51.276 fused_ordering(1010) 00:13:51.276 fused_ordering(1011) 00:13:51.276 fused_ordering(1012) 00:13:51.276 fused_ordering(1013) 00:13:51.276 fused_ordering(1014) 00:13:51.276 fused_ordering(1015) 00:13:51.276 fused_ordering(1016) 00:13:51.276 fused_ordering(1017) 00:13:51.276 fused_ordering(1018) 00:13:51.276 fused_ordering(1019) 00:13:51.276 fused_ordering(1020) 00:13:51.276 fused_ordering(1021) 00:13:51.276 fused_ordering(1022) 00:13:51.276 fused_ordering(1023) 00:13:51.276 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:51.276 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:51.276 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:51.276 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:13:51.276 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:51.276 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:13:51.276 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:51.276 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:51.276 rmmod nvme_tcp 00:13:51.276 rmmod nvme_fabrics 00:13:51.276 rmmod nvme_keyring 00:13:51.276 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:51.276 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:13:51.276 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:13:51.276 10:40:58 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 1641370 ']' 00:13:51.276 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 1641370 00:13:51.276 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 1641370 ']' 00:13:51.276 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 1641370 00:13:51.276 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:13:51.276 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:51.276 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1641370 00:13:51.276 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:51.276 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:51.276 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1641370' 00:13:51.276 killing process with pid 1641370 00:13:51.276 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 1641370 00:13:51.276 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 1641370 00:13:51.534 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:51.534 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:51.534 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:51.534 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:13:51.535 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:13:51.535 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:51.535 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:13:51.535 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:51.535 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:51.535 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.535 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:51.535 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:53.436 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:53.436 00:13:53.436 real 0m10.690s 00:13:53.436 user 0m5.022s 00:13:53.436 sys 0m5.816s 00:13:53.436 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:53.436 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:53.436 ************************************ 00:13:53.436 END TEST nvmf_fused_ordering 00:13:53.436 
************************************ 00:13:53.436 10:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:53.436 10:41:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:53.436 10:41:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:53.436 10:41:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:53.436 ************************************ 00:13:53.436 START TEST nvmf_ns_masking 00:13:53.436 ************************************ 00:13:53.436 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:53.696 * Looking for test storage... 00:13:53.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:53.696 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:53.696 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:13:53.696 10:41:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:53.696 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:53.696 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:53.696 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:53.696 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:53.696 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:53.696 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:53.696 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:53.696 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:53.696 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:13:53.696 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:53.696 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:53.696 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:53.696 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:53.696 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:53.696 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:53.696 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:53.696 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:53.696 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:53.696 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:53.696 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:53.696 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:53.696 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:53.696 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:53.696 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:53.696 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:53.696 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:53.696 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:53.696 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:53.696 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:53.696 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:53.696 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:53.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.696 --rc genhtml_branch_coverage=1 00:13:53.696 --rc genhtml_function_coverage=1 00:13:53.696 --rc genhtml_legend=1 00:13:53.696 --rc geninfo_all_blocks=1 00:13:53.696 --rc geninfo_unexecuted_blocks=1 00:13:53.696 00:13:53.696 ' 00:13:53.696 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:53.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.696 --rc genhtml_branch_coverage=1 00:13:53.696 --rc genhtml_function_coverage=1 00:13:53.696 --rc genhtml_legend=1 00:13:53.696 --rc geninfo_all_blocks=1 00:13:53.696 --rc geninfo_unexecuted_blocks=1 00:13:53.696 00:13:53.696 ' 00:13:53.696 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:53.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.696 --rc genhtml_branch_coverage=1 00:13:53.696 --rc genhtml_function_coverage=1 00:13:53.696 --rc genhtml_legend=1 00:13:53.696 --rc geninfo_all_blocks=1 00:13:53.697 --rc geninfo_unexecuted_blocks=1 00:13:53.697 00:13:53.697 ' 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:53.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.697 --rc genhtml_branch_coverage=1 00:13:53.697 --rc genhtml_function_coverage=1 00:13:53.697 --rc genhtml_legend=1 00:13:53.697 --rc geninfo_all_blocks=1 00:13:53.697 --rc geninfo_unexecuted_blocks=1 00:13:53.697 00:13:53.697 ' 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:53.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=2869a437-85e7-4a73-88e4-148a42235824 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=eafd62c9-6540-410f-8ff7-3e1fa2eaab68 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=92b9cf7f-98cf-439c-b138-47ea7a5e8251 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:53.697 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:53.698 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:53.698 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:53.698 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:53.698 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:53.698 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:53.698 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:13:53.698 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:00.269 10:41:06 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:00.269 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:00.269 10:41:06 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:00.269 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:00.269 Found net devices under 0000:86:00.0: cvl_0_0 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
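[Note] The NIC discovery traced above reduces to a sysfs glob: for each whitelisted PCI function, nvmf/common.sh lists the device's net/ directory and strips the path to recover the kernel interface name. A minimal standalone sketch of that pattern, using the 0000:86:00.0 address reported in this log (the wrapper script itself is illustrative, not part of the tree):

    #!/usr/bin/env bash
    # Sketch: resolve a PCI function to its kernel net device, mirroring the
    # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) glob in the trace above.
    pci=0000:86:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    if [[ -e ${pci_net_devs[0]} ]]; then
        pci_net_devs=("${pci_net_devs[@]##*/}")   # keep interface names only
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    else
        echo "no net devices under $pci" >&2
    fi

On this machine the glob resolves to cvl_0_0 for 0000:86:00.0 and cvl_0_1 for 0000:86:00.1, which the script then uses as the target and initiator interfaces.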
00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:14:00.269 Found net devices under 0000:86:00.1: cvl_0_1
00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes
00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:14:00.269 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:14:00.270 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:14:00.270 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:14:00.270 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:14:00.270 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:14:00.270 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:14:00.270 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:14:00.270 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:14:00.270 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:14:00.270 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:14:00.270 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:14:00.270 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:14:00.270 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.394 ms
00:14:00.270
00:14:00.270 --- 10.0.0.2 ping statistics ---
00:14:00.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:00.270 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms
00:14:00.270 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:14:00.270 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:00.270 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms
00:14:00.270
00:14:00.270 --- 10.0.0.1 ping statistics ---
00:14:00.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:00.270 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms
00:14:00.270 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:00.270 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0
00:14:00.270 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:14:00.270 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:00.270 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:14:00.270 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:14:00.270 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:00.270 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:14:00.270 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:14:00.270 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart
00:14:00.270 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:14:00.270 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable
00:14:00.270 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:14:00.270 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=1645344
00:14:00.270 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:14:00.270 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 1645344
00:14:00.270 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1645344 ']'
00:14:00.270 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:00.270 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:00.270 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:00.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:00.270 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:00.270 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:14:00.270 [2024-11-19 10:41:07.102515] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:14:00.270 [2024-11-19 10:41:07.102561] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:00.270 [2024-11-19 10:41:07.176881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:00.270 [2024-11-19 10:41:07.216362] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:14:00.270 [2024-11-19 10:41:07.216395] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:14:00.270 [2024-11-19 10:41:07.216402] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:14:00.270 [2024-11-19 10:41:07.216408] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:14:00.270 [2024-11-19 10:41:07.216413] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
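For orientation, the nvmf_tcp_init trace above boils down to one topology: the target-side port is sealed into its own network namespace while the initiator-side port stays in the root namespace, so host and target genuinely talk over TCP on real links. A condensed sketch of that setup (interface and namespace names are this testbed's own, taken from the log; not the common.sh helper verbatim):

    TARGET_IF=cvl_0_0          # moved into the namespace, addressed 10.0.0.2
    INITIATOR_IF=cvl_0_1       # stays in the root namespace, addressed 10.0.0.1
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    # admit NVMe/TCP traffic on the default NVMe-oF port, then check both directions
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

The nvmf_tgt binary itself is then launched inside that namespace via ip netns exec, as the common.sh@508 line above shows.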
00:14:00.270 [2024-11-19 10:41:07.216929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:00.270 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:00.270 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0
00:14:00.270 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:14:00.270 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable
00:14:00.270 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:14:00.270 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:14:00.270 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:14:00.270 [2024-11-19 10:41:07.532081] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:14:00.270 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64
00:14:00.270 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512
00:14:00.270 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:14:00.528 Malloc1
00:14:00.528 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
00:14:00.528 Malloc2
00:14:00.528 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:14:00.786 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
00:14:01.071 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:14:01.330 [2024-11-19 10:41:08.532816] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:01.330 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect
00:14:01.330 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 92b9cf7f-98cf-439c-b138-47ea7a5e8251 -a 10.0.0.2 -s 4420 -i 4
00:14:01.330 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME
00:14:01.330 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0
00:14:01.330 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:14:01.330 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:14:01.330 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2
00:14:03.233 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:14:03.491 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:14:03.491 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:14:03.491 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:14:03.491 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:14:03.491 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0
00:14:03.491 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:14:03.491 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:14:03.491 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:14:03.491 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:14:03.491 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1
00:14:03.491 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:03.491 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:14:03.491 [ 0]:0x1
00:14:03.491 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:14:03.491 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:03.491 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=18f00602f02c41e5bc3b5ae23938b14a
00:14:03.491 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 18f00602f02c41e5bc3b5ae23938b14a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:03.491 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2
00:14:03.749 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1
00:14:03.749 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:03.749 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:14:03.749 [ 0]:0x1
00:14:03.749 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:14:03.749 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:03.749 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=18f00602f02c41e5bc3b5ae23938b14a
00:14:03.749 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 18f00602f02c41e5bc3b5ae23938b14a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
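Condensed, the target bring-up just traced is a five-RPC sequence plus one host-side connect; every command below appears verbatim in the trace (rpc.py abbreviates the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path; NQNs, serial, and host ID are the test's own values):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1        # 64 MB RAM-backed bdev, 512 B blocks
    rpc.py bdev_malloc_create 64 512 -b Malloc2
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I 92b9cf7f-98cf-439c-b138-47ea7a5e8251 -a 10.0.0.2 -s 4420 -i 4

waitforserial then polls lsblk until a block device with serial SPDKISFASTANDAWESOME shows up, which is the sleep 2 / grep -c loop visible above.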
00:14:03.749 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2
00:14:03.749 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:03.749 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:14:03.749 [ 1]:0x2
00:14:03.749 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:14:03.749 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:03.749 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=96cec69e4fb746d7ac742cd3b75ddafb
00:14:03.749 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 96cec69e4fb746d7ac742cd3b75ddafb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:03.749 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect
00:14:03.749 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:14:04.007 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:04.007 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:04.265 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
00:14:04.523 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1
00:14:04.523 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 92b9cf7f-98cf-439c-b138-47ea7a5e8251 -a 10.0.0.2 -s 4420 -i 4
00:14:04.523 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1
00:14:04.523 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0
00:14:04.523 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:14:04.523 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]]
00:14:04.523 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1
00:14:04.523 10:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2
00:14:07.055 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:14:07.055 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:14:07.056 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:14:07.056 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:14:07.056 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:14:07.056 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0
00:14:07.056 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:14:07.056 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:14:07.056 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:14:07.056 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:14:07.056 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1
00:14:07.056 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:14:07.056 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:14:07.056 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:14:07.056 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:07.056 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:14:07.056 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:07.056 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:14:07.056 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:07.056 10:41:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:14:07.056 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:14:07.056 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:07.056 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:14:07.056 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:07.056 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:14:07.056 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:14:07.056 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:14:07.056 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:14:07.056 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2
00:14:07.056 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:07.056 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:14:07.056 [ 0]:0x2
00:14:07.056 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:14:07.056 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:07.056 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=96cec69e4fb746d7ac742cd3b75ddafb
00:14:07.056 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 96cec69e4fb746d7ac742cd3b75ddafb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
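The visibility probe repeated throughout this run is ns_masking.sh's ns_is_visible helper; a rough reconstruction from the @43-@45 trace lines (a sketch, not the script verbatim):

    # A namespace counts as visible when it appears in the controller's
    # active-NSID list and Identify Namespace reports a non-zero NGUID.
    ns_is_visible() {
        local nsid=$1
        nvme list-ns /dev/nvme0 | grep "$nsid"          # prints "[ i]:0xN" when present
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        [[ $nguid != 00000000000000000000000000000000 ]]
    }

A masked namespace fails both ways: grep prints nothing and the NGUID comes back all zeroes, which is exactly what the NOT ns_is_visible 0x1 assertion above relies on.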
00:14:07.056 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:14:07.056 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1
00:14:07.056 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:07.056 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:14:07.056 [ 0]:0x1
00:14:07.056 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:14:07.056 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:07.056 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=18f00602f02c41e5bc3b5ae23938b14a
00:14:07.056 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 18f00602f02c41e5bc3b5ae23938b14a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:07.056 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2
00:14:07.056 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:07.056 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:14:07.056 [ 1]:0x2
00:14:07.056 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:14:07.056 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:07.056 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=96cec69e4fb746d7ac742cd3b75ddafb
00:14:07.056 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 96cec69e4fb746d7ac742cd3b75ddafb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:07.056 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:14:07.315 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1
00:14:07.315 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:14:07.315 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:14:07.315 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:14:07.315 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:07.315 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:14:07.315 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:07.315 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:14:07.315 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:07.315 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:14:07.315 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:14:07.315 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:07.315 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:14:07.315 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:07.315 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:14:07.315 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:14:07.315 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:14:07.315 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:14:07.315 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2
00:14:07.315 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:07.315 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:14:07.315 [ 0]:0x2
00:14:07.315 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:14:07.315 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:07.574 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=96cec69e4fb746d7ac742cd3b75ddafb
00:14:07.574 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 96cec69e4fb746d7ac742cd3b75ddafb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:07.574 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect
00:14:07.574 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:14:07.574 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:07.574 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:14:07.574 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2
00:14:07.832 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 92b9cf7f-98cf-439c-b138-47ea7a5e8251 -a 10.0.0.2 -s 4420 -i 4
00:14:07.832 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2
00:14:07.832 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0
00:14:07.832 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:14:07.832 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]]
00:14:07.832 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2
00:14:07.832 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2
00:14:10.365 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:14:10.365 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:14:10.365 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:14:10.365 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2
00:14:10.365 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:14:10.365 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0
00:14:10.365 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:14:10.365 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:14:10.365 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:14:10.365 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:14:10.365 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1
00:14:10.365 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:10.365 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:14:10.365 [ 0]:0x1
00:14:10.365 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:10.365 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:14:10.365 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=18f00602f02c41e5bc3b5ae23938b14a
00:14:10.365 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 18f00602f02c41e5bc3b5ae23938b14a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:10.365 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2
00:14:10.365 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:10.365 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:14:10.365 [ 1]:0x2
00:14:10.365 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:14:10.365 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:10.365 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=96cec69e4fb746d7ac742cd3b75ddafb
00:14:10.365 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 96cec69e4fb746d7ac742cd3b75ddafb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:10.365 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:14:10.365 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1
00:14:10.365 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:14:10.365 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:14:10.365 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:14:10.365 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:10.365 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:14:10.365 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:10.365 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:14:10.365 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:10.365 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:14:10.365 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:14:10.365 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:10.365 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:14:10.365 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:10.365 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:14:10.366 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:14:10.366 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:14:10.366 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:14:10.366 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2
00:14:10.366 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:14:10.366 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:10.366 [ 0]:0x2
00:14:10.366 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:14:10.366 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:10.366 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=96cec69e4fb746d7ac742cd3b75ddafb
00:14:10.366 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 96cec69e4fb746d7ac742cd3b75ddafb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
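Isolated from the surrounding trace, the per-host masking API being exercised here is just three RPCs, all visible verbatim above: attach the namespace hidden, grant it to a host NQN, revoke it again.

    rpc.py nvmf_subsystem_add_ns  nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    rpc.py nvmf_ns_add_host       nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # NSID 1 appears to host1
    rpc.py nvmf_ns_remove_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # NSID 1 masked again

Namespace 2 was added without --no-auto-visible, so it stays visible to every connecting host throughout, which is why ns_is_visible 0x2 keeps passing while 0x1 flips with each grant and revoke.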
00:14:10.366 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:14:10.366 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:14:10.366 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:14:10.366 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:14:10.366 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:10.366 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:14:10.366 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:10.366 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:14:10.366 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:10.366 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:14:10.366 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:14:10.366 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:14:10.625 [2024-11-19 10:41:17.867335] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2
00:14:10.625 request:
00:14:10.625 {
00:14:10.625 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:14:10.625 "nsid": 2,
00:14:10.625 "host": "nqn.2016-06.io.spdk:host1",
00:14:10.625 "method": "nvmf_ns_remove_host",
00:14:10.625 "req_id": 1
00:14:10.625 }
00:14:10.625 Got JSON-RPC error response
00:14:10.625 response:
00:14:10.625 {
00:14:10.625 "code": -32602,
00:14:10.625 "message": "Invalid parameters"
00:14:10.625 }
00:14:10.625 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:14:10.625 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:14:10.625 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:14:10.625 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:14:10.625 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1
00:14:10.625 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:14:10.625 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:14:10.625 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:14:10.625 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:10.625 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:14:10.625 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:10.625 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:14:10.625 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:10.625 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:14:10.625 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:14:10.625 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:10.625 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:14:10.625 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:10.625 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:14:10.625 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:14:10.625 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:14:10.625 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:14:10.625 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2
00:14:10.625 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:10.625 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:14:10.625 [ 0]:0x2
00:14:10.626 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:14:10.626 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:10.626 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=96cec69e4fb746d7ac742cd3b75ddafb
00:14:10.626 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 96cec69e4fb746d7ac742cd3b75ddafb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:10.626 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect
00:14:10.626 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:14:10.884 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:10.884 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1647187
00:14:10.884 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2
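The es bookkeeping that brackets every expected failure in this trace comes from autotest_common.sh's NOT wrapper; in spirit it is (a sketch, the real helper also validates its argument via valid_exec_arg first):

    # Succeed only when the wrapped command fails: expected-failure assertions.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }

So NOT rpc.py nvmf_ns_remove_host ... 2 ... passes precisely because namespace 2 is auto-visible and the target rejects the per-host visibility change with the -32602 Invalid parameters response shown above.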
00:14:10.884 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT
00:14:10.884 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1647187 /var/tmp/host.sock
00:14:10.884 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1647187 ']'
00:14:10.884 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock
00:14:10.884 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:10.884 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...'
00:14:10.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...
00:14:10.884 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:10.884 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:14:10.884 [2024-11-19 10:41:18.153474] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:14:10.884 [2024-11-19 10:41:18.153520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1647187 ]
00:14:10.884 [2024-11-19 10:41:18.230064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:10.884 [2024-11-19 10:41:18.272404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:14:11.143 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:11.143 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0
00:14:11.143 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:11.401 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:14:11.658 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 2869a437-85e7-4a73-88e4-148a42235824
00:14:11.658 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:14:11.658 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 2869A43785E74A7388E4148A42235824 -i
00:14:11.659 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid eafd62c9-6540-410f-8ff7-3e1fa2eaab68
00:14:11.659 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:14:11.659 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g EAFD62C96540410F8FF73E1FA2EAAB68 -i
00:14:11.916 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:14:12.175 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2
00:14:12.433 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
00:14:12.433 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
00:14:12.692 nvme0n1
00:14:12.692 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1
00:14:12.692 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1
00:14:13.259 nvme1n2
00:14:13.259 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name'
00:14:13.259 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs
00:14:13.259 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort
00:14:13.259 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs
00:14:13.259 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs
00:14:13.517 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]]
00:14:13.517 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1
00:14:13.517 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid'
00:14:13.517 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1
00:14:13.517 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 2869a437-85e7-4a73-88e4-148a42235824 == \2\8\6\9\a\4\3\7\-\8\5\e\7\-\4\a\7\3\-\8\8\e\4\-\1\4\8\a\4\2\2\3\5\8\2\4 ]]
00:14:13.517 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2
00:14:13.517 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid'
00:14:13.517 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2
00:14:13.775 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ eafd62c9-6540-410f-8ff7-3e1fa2eaab68 == \e\a\f\d\6\2\c\9\-\6\5\4\0\-\4\1\0\f\-\8\f\f\7\-\3\e\1\f\a\2\e\a\a\b\6\8 ]]
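The uuid2nguid helper seen at nvmf/common.sh@787 turns an RFC 4122 UUID into the bare 32-digit NGUID form that nvmf_subsystem_add_ns -g expects. Judging by the traced tr -d - call and the resulting value, it is equivalent to this (the upper-casing step is inferred from the output, so treat the sketch as an approximation):

    uuid2nguid() {
        # strip the dashes and upper-case the hex digits
        tr -d - <<< "${1^^}"
    }
    uuid2nguid 2869a437-85e7-4a73-88e4-148a42235824
    # -> 2869A43785E74A7388E4148A42235824

The round-trip check just above (bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid') confirms the host sees the same identifier back in its dashed UUID form.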
00:14:14.034 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:14.293 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:14:14.293 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 2869a437-85e7-4a73-88e4-148a42235824
00:14:14.293 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:14:14.293 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 2869A43785E74A7388E4148A42235824
00:14:14.293 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:14:14.293 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 2869A43785E74A7388E4148A42235824
00:14:14.293 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:14:14.293 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:14.293 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:14:14.293 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:14.293 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:14:14.293 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:14.293 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:14:14.293 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:14:14.294 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 2869A43785E74A7388E4148A42235824
00:14:14.294 [2024-11-19 10:41:21.701934] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid
00:14:14.294 [2024-11-19 10:41:21.701974] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19
00:14:14.294 [2024-11-19 10:41:21.701984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:14.294 request:
00:14:14.294 {
00:14:14.294 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:14:14.294 "namespace": {
00:14:14.294 "bdev_name": "invalid",
00:14:14.294 "nsid": 1,
00:14:14.294 "nguid": "2869A43785E74A7388E4148A42235824",
00:14:14.294 "no_auto_visible": false
00:14:14.294 },
00:14:14.294 "method": "nvmf_subsystem_add_ns",
00:14:14.294 "req_id": 1
00:14:14.294 }
00:14:14.294 Got JSON-RPC error response
00:14:14.294 response:
00:14:14.294 {
00:14:14.294 "code": -32602,
00:14:14.294 "message": "Invalid parameters"
00:14:14.294 }
00:14:14.294 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:14:14.294 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:14:14.294 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:14:14.294 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:14:14.294 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 2869a437-85e7-4a73-88e4-148a42235824
00:14:14.294 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:14:14.294 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 2869A43785E74A7388E4148A42235824 -i
00:14:14.553 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s
00:14:17.085 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs
00:14:17.085 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length
00:14:17.085 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs
00:14:17.085 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 ))
00:14:17.085 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 1647187
00:14:17.085 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1647187 ']'
00:14:17.085 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1647187
00:14:17.085 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname
00:14:17.085 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:17.085 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1647187
00:14:17.085 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:14:17.085 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:14:17.085 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1647187'
00:14:17.085 killing process with pid 1647187
00:14:17.085 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1647187
00:14:17.085 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1647187
00:14:17.085 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:17.344 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:14:17.344 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini
00:14:17.344 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup
00:14:17.344 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync
00:14:17.344 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:14:17.344 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e
00:14:17.344 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20}
00:14:17.344 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:14:17.344 rmmod nvme_tcp
00:14:17.344 rmmod nvme_fabrics
00:14:17.344 rmmod nvme_keyring
00:14:17.344 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:14:17.344 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e
00:14:17.344 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0
00:14:17.344 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 1645344 ']'
00:14:17.344 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 1645344
00:14:17.344 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1645344 ']'
00:14:17.344 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1645344
00:14:17.344 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname
00:14:17.344 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:17.344 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1645344
00:14:17.603 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:17.603 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:17.603 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1645344'
00:14:17.603 killing process with pid 1645344
00:14:17.603 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1645344
00:14:17.603 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1645344
00:14:17.603 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:14:17.603 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:14:17.603 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:14:17.603 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr
00:14:17.603 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save
00:14:17.603 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:14:17.603 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore
00:14:17.603 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:14:17.603 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns
00:14:17.603 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:17.603 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:14:17.603 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:20.141 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:14:20.141
00:14:20.141 real 0m26.193s
00:14:20.141 user 0m31.448s
00:14:20.141 sys 0m7.035s
00:14:20.141 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:20.141 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:14:20.141 ************************************
00:14:20.141 END TEST nvmf_ns_masking
00:14:20.141 ************************************
00:14:20.141 10:41:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]]
00:14:20.141 10:41:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp
00:14:20.141 10:41:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:14:20.141 10:41:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:20.141 10:41:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:14:20.141 ************************************
00:14:20.141 START TEST nvmf_nvme_cli
00:14:20.141 ************************************
00:14:20.141 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp
00:14:20.141 * Looking for test storage...
00:14:20.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:20.141 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:20.141 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:14:20.141 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:20.141 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:20.141 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:20.141 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:20.141 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:20.141 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:20.141 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:14:20.141 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:20.141 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:20.141 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:20.141 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:20.141 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:20.141 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:20.141 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:20.141 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:20.141 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:20.141 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:20.141 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:20.141 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:20.141 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:20.141 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:20.141 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:20.141 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:20.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.142 --rc genhtml_branch_coverage=1 00:14:20.142 --rc genhtml_function_coverage=1 00:14:20.142 --rc genhtml_legend=1 00:14:20.142 --rc geninfo_all_blocks=1 00:14:20.142 --rc geninfo_unexecuted_blocks=1 00:14:20.142 00:14:20.142 ' 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:20.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.142 --rc genhtml_branch_coverage=1 00:14:20.142 --rc genhtml_function_coverage=1 00:14:20.142 --rc genhtml_legend=1 00:14:20.142 --rc geninfo_all_blocks=1 00:14:20.142 --rc geninfo_unexecuted_blocks=1 00:14:20.142 00:14:20.142 ' 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:20.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.142 --rc genhtml_branch_coverage=1 00:14:20.142 --rc genhtml_function_coverage=1 00:14:20.142 --rc genhtml_legend=1 00:14:20.142 --rc geninfo_all_blocks=1 00:14:20.142 --rc geninfo_unexecuted_blocks=1 00:14:20.142 00:14:20.142 ' 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:20.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.142 --rc genhtml_branch_coverage=1 00:14:20.142 --rc genhtml_function_coverage=1 00:14:20.142 --rc genhtml_legend=1 00:14:20.142 --rc geninfo_all_blocks=1 00:14:20.142 --rc geninfo_unexecuted_blocks=1 00:14:20.142 00:14:20.142 ' 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
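Annotation: the cmp_versions trace just above decides `lt 1.15 2` by splitting both version strings on `.`, `-`, and `:` and comparing numeric components left to right, treating missing components as zero. A standalone sketch of the same idea (ver_lt is a hypothetical name; SPDK's real helper lives in scripts/common.sh, and both assume numeric components):

    # Return 0 (true) when $1 sorts strictly before $2, e.g. 1.15 < 2.
    ver_lt() {
        local IFS=.-:                        # same separators as the trace
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}  # missing components count as 0
            (( x > y )) && return 1
            (( x < y )) && return 0
        done
        return 1                             # equal versions are not less-than
    }
    ver_lt 1.15 2 && echo 'lcov predates 2.x'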
00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:20.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:20.142 10:41:27 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:20.142 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:26.713 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:26.713 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:26.713 
10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:26.713 Found net devices under 0000:86:00.0: cvl_0_0 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:26.713 Found net devices under 0000:86:00.1: cvl_0_1 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:26.713 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:26.713 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:26.713 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:26.713 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:26.713 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:26.713 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:26.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:26.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.456 ms 00:14:26.714 00:14:26.714 --- 10.0.0.2 ping statistics --- 00:14:26.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.714 rtt min/avg/max/mdev = 0.456/0.456/0.456/0.000 ms 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:26.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:26.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:14:26.714 00:14:26.714 --- 10.0.0.1 ping statistics --- 00:14:26.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.714 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=1651870 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 1651870 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 1651870 ']' 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:26.714 [2024-11-19 10:41:33.328821] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
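Annotation: the nvmf_tcp_init sequence traced just above carves the two-port NIC into a target side and an initiator side: cvl_0_0 moves into a fresh network namespace where the target will run, cvl_0_1 stays in the host as the initiator, and a tagged iptables rule admits NVMe/TCP traffic on port 4420. A condensed sketch of those steps, interface names and addresses exactly as logged (run as root):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target NIC into the ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator IP, host side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Tagged so the teardown's iptables-save round trip can find the rule.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                                 # host -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> host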
00:14:26.714 [2024-11-19 10:41:33.328874] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:26.714 [2024-11-19 10:41:33.408820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:26.714 [2024-11-19 10:41:33.451813] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:26.714 [2024-11-19 10:41:33.451855] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:26.714 [2024-11-19 10:41:33.451863] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:26.714 [2024-11-19 10:41:33.451868] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:26.714 [2024-11-19 10:41:33.451873] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:26.714 [2024-11-19 10:41:33.453486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:26.714 [2024-11-19 10:41:33.453601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:26.714 [2024-11-19 10:41:33.453632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.714 [2024-11-19 10:41:33.453633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:26.714 [2024-11-19 10:41:33.599125] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:26.714 Malloc0 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
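Annotation: with the target listening on /var/tmp/spdk.sock, everything from here is provisioned over JSON-RPC; the rpc_cmd calls traced here and continuing just below map one-to-one onto scripts/rpc.py. The same sequence driven directly, arguments exactly as logged (rpc.py reaches the default UNIX socket, which the network namespace does not isolate):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192    # TCP transport, 8 KiB I/O units
    $rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MiB bdev, 512 B blocks
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420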
00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:26.714 Malloc1 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:26.714 [2024-11-19 10:41:33.694745] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:14:26.714 00:14:26.714 Discovery Log Number of Records 2, Generation counter 2 00:14:26.714 =====Discovery Log Entry 0====== 00:14:26.714 trtype: tcp 00:14:26.714 adrfam: ipv4 00:14:26.714 subtype: current discovery subsystem 00:14:26.714 treq: not required 00:14:26.714 portid: 0 00:14:26.714 trsvcid: 4420 00:14:26.714 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:14:26.714 traddr: 10.0.0.2 00:14:26.714 eflags: explicit discovery connections, duplicate discovery information 00:14:26.714 sectype: none 00:14:26.714 =====Discovery Log Entry 1====== 00:14:26.714 trtype: tcp 00:14:26.714 adrfam: ipv4 00:14:26.714 subtype: nvme subsystem 00:14:26.714 treq: not required 00:14:26.714 portid: 0 00:14:26.714 trsvcid: 4420 00:14:26.714 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:26.714 traddr: 10.0.0.2 00:14:26.714 eflags: none 00:14:26.714 sectype: none 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:26.714 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:26.715 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:26.715 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:26.715 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:26.715 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:26.715 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:26.715 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:27.651 10:41:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:27.651 10:41:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:14:27.651 10:41:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:27.651 10:41:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:27.651 10:41:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:27.651 10:41:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:14:30.184 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:30.184 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:30.185 10:41:37 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:30.185 /dev/nvme0n2 ]] 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:30.185 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.185 10:41:37 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:30.185 rmmod nvme_tcp 00:14:30.185 rmmod nvme_fabrics 00:14:30.185 rmmod nvme_keyring 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 1651870 ']' 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 1651870 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 1651870 ']' 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 1651870 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
1651870 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1651870' 00:14:30.185 killing process with pid 1651870 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 1651870 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 1651870 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:30.185 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:32.719 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:32.719 00:14:32.719 real 0m12.464s 00:14:32.719 user 0m18.005s 00:14:32.719 sys 0m5.057s 00:14:32.719 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:32.719 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:32.719 ************************************ 00:14:32.719 END TEST nvmf_nvme_cli 00:14:32.719 ************************************ 00:14:32.719 10:41:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:32.719 10:41:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:32.719 10:41:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:32.719 10:41:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:32.719 10:41:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:32.719 ************************************ 00:14:32.719 START TEST nvmf_vfio_user 00:14:32.719 ************************************ 00:14:32.719 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:14:32.719 * Looking for test storage... 00:14:32.719 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:32.719 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:32.719 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:14:32.719 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:32.719 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:32.719 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:32.719 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:32.719 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:32.719 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:32.719 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:32.719 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:32.719 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:32.719 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:32.719 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:32.719 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:32.719 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:32.719 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:32.719 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:32.719 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:32.719 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:32.719 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:32.719 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:32.719 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:32.719 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:32.719 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:32.719 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:32.719 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:32.719 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:32.719 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:32.719 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:32.719 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:32.719 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:32.719 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:32.719 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:32.719 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:32.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:32.719 --rc genhtml_branch_coverage=1 00:14:32.719 --rc genhtml_function_coverage=1 00:14:32.719 --rc genhtml_legend=1 00:14:32.719 --rc geninfo_all_blocks=1 00:14:32.719 --rc geninfo_unexecuted_blocks=1 00:14:32.720 00:14:32.720 ' 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:32.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:32.720 --rc genhtml_branch_coverage=1 00:14:32.720 --rc genhtml_function_coverage=1 00:14:32.720 --rc genhtml_legend=1 00:14:32.720 --rc geninfo_all_blocks=1 00:14:32.720 --rc geninfo_unexecuted_blocks=1 00:14:32.720 00:14:32.720 ' 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:32.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:32.720 --rc genhtml_branch_coverage=1 00:14:32.720 --rc genhtml_function_coverage=1 00:14:32.720 --rc genhtml_legend=1 00:14:32.720 --rc geninfo_all_blocks=1 00:14:32.720 --rc geninfo_unexecuted_blocks=1 00:14:32.720 00:14:32.720 ' 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:32.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:32.720 --rc genhtml_branch_coverage=1 00:14:32.720 --rc genhtml_function_coverage=1 00:14:32.720 --rc genhtml_legend=1 00:14:32.720 --rc geninfo_all_blocks=1 00:14:32.720 --rc geninfo_unexecuted_blocks=1 00:14:32.720 00:14:32.720 ' 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:32.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
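The shell warning captured above ("[: : integer expression expected" from nvmf/common.sh line 33) comes from the traced test '[' '' -eq 1 ']': the POSIX test builtin's -eq needs integer operands on both sides, and the variable under test expanded to an empty string. Below is a minimal bash sketch of the failure and two tolerant rewrites; MAYBE_FLAG is a hypothetical stand-in, since the log does not show which variable was empty.

    # Reproducing the warning: a numeric test against a variable that expands empty.
    unset MAYBE_FLAG                                  # hypothetical name, not from common.sh
    [ "$MAYBE_FLAG" -eq 1 ] && echo "flag set"        # bash: [: : integer expression expected

    # Tolerant rewrites that stay quiet when the variable is empty or unset:
    [ "${MAYBE_FLAG:-0}" -eq 1 ] && echo "flag set"   # default the expansion to 0
    [[ "$MAYBE_FLAG" == "1" ]] && echo "flag set"     # or compare as a string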
00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1653155 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1653155' 00:14:32.720 Process pid: 1653155 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1653155 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1653155 ']' 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:32.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:32.720 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:32.720 [2024-11-19 10:41:39.957347] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:14:32.720 [2024-11-19 10:41:39.957393] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:32.720 [2024-11-19 10:41:40.031906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:32.720 [2024-11-19 10:41:40.081121] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:32.720 [2024-11-19 10:41:40.081157] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:32.720 [2024-11-19 10:41:40.081164] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:32.720 [2024-11-19 10:41:40.081170] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:32.720 [2024-11-19 10:41:40.081176] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:32.720 [2024-11-19 10:41:40.082659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:32.720 [2024-11-19 10:41:40.082766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:32.720 [2024-11-19 10:41:40.082871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.720 [2024-11-19 10:41:40.082873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:32.979 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:32.979 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:32.979 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:33.915 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:34.174 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:34.174 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:34.174 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:34.174 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:34.174 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:34.174 Malloc1 00:14:34.174 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:34.433 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:34.691 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:34.950 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:34.950 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:34.950 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:35.208 Malloc2 00:14:35.208 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
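The xtrace above shows the VFIOUSER transport being created and the first vfio-user device being provisioned; the add_ns and add_listener calls for the second device continue just below. Collapsed into one place, the sequence looks roughly like the following sketch (paths, NQNs, and sizes are taken from the trace itself; this illustrates the flow, it is not the test script verbatim):

    rpc=./scripts/rpc.py   # the trace uses the full Jenkins workspace path
    $rpc nvmf_create_transport -t VFIOUSER
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        $rpc bdev_malloc_create 64 512 -b Malloc$i          # 64 MiB bdev, 512 B blocks
        $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
            -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done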
00:14:35.208 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:35.466 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:35.728 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:35.728 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:35.728 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:35.728 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:35.728 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:35.728 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:35.728 [2024-11-19 10:41:43.050448] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:14:35.728 [2024-11-19 10:41:43.050481] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1653640 ] 00:14:35.728 [2024-11-19 10:41:43.092134] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:35.728 [2024-11-19 10:41:43.096439] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:35.728 [2024-11-19 10:41:43.096461] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f1677127000 00:14:35.728 [2024-11-19 10:41:43.097436] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:35.728 [2024-11-19 10:41:43.098438] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:35.728 [2024-11-19 10:41:43.099443] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:35.728 [2024-11-19 10:41:43.100450] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:35.728 [2024-11-19 10:41:43.101453] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:35.728 [2024-11-19 10:41:43.102460] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:35.728 [2024-11-19 10:41:43.103462] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:14:35.728 [2024-11-19 10:41:43.104469] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:35.728 [2024-11-19 10:41:43.105474] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:35.728 [2024-11-19 10:41:43.105483] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f167711c000 00:14:35.728 [2024-11-19 10:41:43.106428] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:35.728 [2024-11-19 10:41:43.118081] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:35.728 [2024-11-19 10:41:43.118102] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:14:35.728 [2024-11-19 10:41:43.123591] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:35.728 [2024-11-19 10:41:43.123629] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:35.728 [2024-11-19 10:41:43.123697] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:14:35.728 [2024-11-19 10:41:43.123712] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:14:35.728 [2024-11-19 10:41:43.123717] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:14:35.728 [2024-11-19 10:41:43.124586] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:35.728 [2024-11-19 10:41:43.124594] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:14:35.728 [2024-11-19 10:41:43.124601] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:14:35.728 [2024-11-19 10:41:43.125590] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:35.728 [2024-11-19 10:41:43.125598] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:14:35.728 [2024-11-19 10:41:43.125604] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:35.728 [2024-11-19 10:41:43.126600] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:35.728 [2024-11-19 10:41:43.126608] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:35.728 [2024-11-19 10:41:43.127603] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
00:14:35.728 [2024-11-19 10:41:43.127610] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:35.728 [2024-11-19 10:41:43.127617] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:35.728 [2024-11-19 10:41:43.127623] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:35.729 [2024-11-19 10:41:43.127730] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:14:35.729 [2024-11-19 10:41:43.127734] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:35.729 [2024-11-19 10:41:43.127739] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:35.729 [2024-11-19 10:41:43.128607] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:35.729 [2024-11-19 10:41:43.129613] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:35.729 [2024-11-19 10:41:43.130621] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:35.729 [2024-11-19 10:41:43.131623] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:35.729 [2024-11-19 10:41:43.131683] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:35.729 [2024-11-19 10:41:43.132634] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:35.729 [2024-11-19 10:41:43.132641] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:35.729 [2024-11-19 10:41:43.132646] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:35.729 [2024-11-19 10:41:43.132663] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:14:35.729 [2024-11-19 10:41:43.132670] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:35.729 [2024-11-19 10:41:43.132684] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:35.729 [2024-11-19 10:41:43.132689] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:35.729 [2024-11-19 10:41:43.132692] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:35.729 [2024-11-19 10:41:43.132704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:14:35.729 [2024-11-19 10:41:43.132746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:35.729 [2024-11-19 10:41:43.132753] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:14:35.729 [2024-11-19 10:41:43.132758] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:14:35.729 [2024-11-19 10:41:43.132762] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:14:35.729 [2024-11-19 10:41:43.132766] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:35.729 [2024-11-19 10:41:43.132772] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:14:35.729 [2024-11-19 10:41:43.132777] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:14:35.729 [2024-11-19 10:41:43.132783] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:14:35.729 [2024-11-19 10:41:43.132792] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:35.729 [2024-11-19 10:41:43.132801] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:35.729 [2024-11-19 10:41:43.132811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:35.729 [2024-11-19 10:41:43.132821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:35.729 [2024-11-19 10:41:43.132829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:35.729 [2024-11-19 10:41:43.132836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:35.729 [2024-11-19 10:41:43.132844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:35.729 [2024-11-19 10:41:43.132848] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:35.729 [2024-11-19 10:41:43.132854] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:35.729 [2024-11-19 10:41:43.132863] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:35.729 [2024-11-19 10:41:43.132871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:35.729 [2024-11-19 10:41:43.132877] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:14:35.729 
[2024-11-19 10:41:43.132882] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:35.729 [2024-11-19 10:41:43.132888] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:14:35.729 [2024-11-19 10:41:43.132893] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:35.729 [2024-11-19 10:41:43.132901] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:35.729 [2024-11-19 10:41:43.132915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:35.729 [2024-11-19 10:41:43.132968] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:14:35.729 [2024-11-19 10:41:43.132975] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:35.729 [2024-11-19 10:41:43.132982] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:35.729 [2024-11-19 10:41:43.132986] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:35.729 [2024-11-19 10:41:43.132989] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:35.729 [2024-11-19 10:41:43.132994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:35.729 [2024-11-19 10:41:43.133006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:35.729 [2024-11-19 10:41:43.133015] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:14:35.729 [2024-11-19 10:41:43.133025] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:14:35.729 [2024-11-19 10:41:43.133032] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:35.729 [2024-11-19 10:41:43.133038] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:35.729 [2024-11-19 10:41:43.133042] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:35.729 [2024-11-19 10:41:43.133045] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:35.729 [2024-11-19 10:41:43.133050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:35.729 [2024-11-19 10:41:43.133072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:35.729 [2024-11-19 10:41:43.133083] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:14:35.729 [2024-11-19 10:41:43.133090] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:35.729 [2024-11-19 10:41:43.133096] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:35.729 [2024-11-19 10:41:43.133099] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:35.729 [2024-11-19 10:41:43.133102] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:35.729 [2024-11-19 10:41:43.133108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:35.729 [2024-11-19 10:41:43.133120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:35.729 [2024-11-19 10:41:43.133127] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:35.729 [2024-11-19 10:41:43.133133] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:35.729 [2024-11-19 10:41:43.133139] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:14:35.729 [2024-11-19 10:41:43.133145] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:35.730 [2024-11-19 10:41:43.133149] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:35.730 [2024-11-19 10:41:43.133154] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:14:35.730 [2024-11-19 10:41:43.133159] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:35.730 [2024-11-19 10:41:43.133163] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:14:35.730 [2024-11-19 10:41:43.133167] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:14:35.730 [2024-11-19 10:41:43.133183] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:35.730 [2024-11-19 10:41:43.133192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:35.730 [2024-11-19 10:41:43.133203] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:35.730 [2024-11-19 10:41:43.133213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:35.730 [2024-11-19 10:41:43.133223] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:35.730 [2024-11-19 10:41:43.133231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:35.730 [2024-11-19 10:41:43.133241] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:35.730 [2024-11-19 10:41:43.133253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:35.730 [2024-11-19 10:41:43.133264] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:35.730 [2024-11-19 10:41:43.133269] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:35.730 [2024-11-19 10:41:43.133272] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:35.730 [2024-11-19 10:41:43.133275] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:35.730 [2024-11-19 10:41:43.133278] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:35.730 [2024-11-19 10:41:43.133284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:35.730 [2024-11-19 10:41:43.133290] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:35.730 [2024-11-19 10:41:43.133294] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:35.730 [2024-11-19 10:41:43.133297] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:35.730 [2024-11-19 10:41:43.133302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:35.730 [2024-11-19 10:41:43.133309] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:35.730 [2024-11-19 10:41:43.133312] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:35.730 [2024-11-19 10:41:43.133315] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:35.730 [2024-11-19 10:41:43.133321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:35.730 [2024-11-19 10:41:43.133328] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:35.730 [2024-11-19 10:41:43.133332] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:35.730 [2024-11-19 10:41:43.133335] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:35.730 [2024-11-19 10:41:43.133340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:35.730 [2024-11-19 10:41:43.133346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:35.730 [2024-11-19 10:41:43.133358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:14:35.730 [2024-11-19 10:41:43.133367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:35.730 [2024-11-19 10:41:43.133375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:35.730 ===================================================== 00:14:35.730 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:35.730 ===================================================== 00:14:35.730 Controller Capabilities/Features 00:14:35.730 ================================ 00:14:35.730 Vendor ID: 4e58 00:14:35.730 Subsystem Vendor ID: 4e58 00:14:35.730 Serial Number: SPDK1 00:14:35.730 Model Number: SPDK bdev Controller 00:14:35.730 Firmware Version: 25.01 00:14:35.730 Recommended Arb Burst: 6 00:14:35.730 IEEE OUI Identifier: 8d 6b 50 00:14:35.730 Multi-path I/O 00:14:35.730 May have multiple subsystem ports: Yes 00:14:35.730 May have multiple controllers: Yes 00:14:35.730 Associated with SR-IOV VF: No 00:14:35.730 Max Data Transfer Size: 131072 00:14:35.730 Max Number of Namespaces: 32 00:14:35.730 Max Number of I/O Queues: 127 00:14:35.730 NVMe Specification Version (VS): 1.3 00:14:35.730 NVMe Specification Version (Identify): 1.3 00:14:35.730 Maximum Queue Entries: 256 00:14:35.730 Contiguous Queues Required: Yes 00:14:35.730 Arbitration Mechanisms Supported 00:14:35.730 Weighted Round Robin: Not Supported 00:14:35.730 Vendor Specific: Not Supported 00:14:35.730 Reset Timeout: 15000 ms 00:14:35.730 Doorbell Stride: 4 bytes 00:14:35.730 NVM Subsystem Reset: Not Supported 00:14:35.730 Command Sets Supported 00:14:35.730 NVM Command Set: Supported 00:14:35.730 Boot Partition: Not Supported 00:14:35.730 Memory Page Size Minimum: 4096 bytes 00:14:35.730 Memory Page Size Maximum: 4096 bytes 00:14:35.730 Persistent Memory Region: Not Supported 00:14:35.730 Optional Asynchronous Events Supported 00:14:35.730 Namespace Attribute Notices: Supported 00:14:35.730 Firmware Activation Notices: Not Supported 00:14:35.730 ANA Change Notices: Not Supported 00:14:35.730 PLE Aggregate Log Change Notices: Not Supported 00:14:35.730 LBA Status Info Alert Notices: Not Supported 00:14:35.730 EGE Aggregate Log Change Notices: Not Supported 00:14:35.730 Normal NVM Subsystem Shutdown event: Not Supported 00:14:35.730 Zone Descriptor Change Notices: Not Supported 00:14:35.730 Discovery Log Change Notices: Not Supported 00:14:35.730 Controller Attributes 00:14:35.730 128-bit Host Identifier: Supported 00:14:35.730 Non-Operational Permissive Mode: Not Supported 00:14:35.730 NVM Sets: Not Supported 00:14:35.730 Read Recovery Levels: Not Supported 00:14:35.730 Endurance Groups: Not Supported 00:14:35.730 Predictable Latency Mode: Not Supported 00:14:35.730 Traffic Based Keep Alive: Not Supported 00:14:35.730 Namespace Granularity: Not Supported 00:14:35.730 SQ Associations: Not Supported 00:14:35.730 UUID List: Not Supported 00:14:35.730 Multi-Domain Subsystem: Not Supported 00:14:35.730 Fixed Capacity Management: Not Supported 00:14:35.730 Variable Capacity Management: Not Supported 00:14:35.730 Delete Endurance Group: Not Supported 00:14:35.730 Delete NVM Set: Not Supported 00:14:35.730 Extended LBA Formats Supported: Not Supported 00:14:35.730 Flexible Data Placement Supported: Not Supported 00:14:35.730 00:14:35.730 Controller Memory Buffer Support 00:14:35.730 ================================ 00:14:35.730 
Supported: No 00:14:35.730 00:14:35.730 Persistent Memory Region Support 00:14:35.730 ================================ 00:14:35.730 Supported: No 00:14:35.730 00:14:35.730 Admin Command Set Attributes 00:14:35.730 ============================ 00:14:35.730 Security Send/Receive: Not Supported 00:14:35.730 Format NVM: Not Supported 00:14:35.730 Firmware Activate/Download: Not Supported 00:14:35.730 Namespace Management: Not Supported 00:14:35.730 Device Self-Test: Not Supported 00:14:35.730 Directives: Not Supported 00:14:35.730 NVMe-MI: Not Supported 00:14:35.730 Virtualization Management: Not Supported 00:14:35.730 Doorbell Buffer Config: Not Supported 00:14:35.730 Get LBA Status Capability: Not Supported 00:14:35.730 Command & Feature Lockdown Capability: Not Supported 00:14:35.730 Abort Command Limit: 4 00:14:35.730 Async Event Request Limit: 4 00:14:35.730 Number of Firmware Slots: N/A 00:14:35.730 Firmware Slot 1 Read-Only: N/A 00:14:35.730 Firmware Activation Without Reset: N/A 00:14:35.730 Multiple Update Detection Support: N/A 00:14:35.730 Firmware Update Granularity: No Information Provided 00:14:35.730 Per-Namespace SMART Log: No 00:14:35.730 Asymmetric Namespace Access Log Page: Not Supported 00:14:35.730 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:35.730 Command Effects Log Page: Supported 00:14:35.730 Get Log Page Extended Data: Supported 00:14:35.730 Telemetry Log Pages: Not Supported 00:14:35.730 Persistent Event Log Pages: Not Supported 00:14:35.730 Supported Log Pages Log Page: May Support 00:14:35.730 Commands Supported & Effects Log Page: Not Supported 00:14:35.730 Feature Identifiers & Effects Log Page: May Support 00:14:35.730 NVMe-MI Commands & Effects Log Page: May Support 00:14:35.730 Data Area 4 for Telemetry Log: Not Supported 00:14:35.730 Error Log Page Entries Supported: 128 00:14:35.730 Keep Alive: Supported 00:14:35.730 Keep Alive Granularity: 10000 ms 00:14:35.730 00:14:35.730 NVM Command Set Attributes 00:14:35.730 ========================== 00:14:35.730 Submission Queue Entry Size 00:14:35.730 Max: 64 00:14:35.730 Min: 64 00:14:35.730 Completion Queue Entry Size 00:14:35.730 Max: 16 00:14:35.730 Min: 16 00:14:35.730 Number of Namespaces: 32 00:14:35.731 Compare Command: Supported 00:14:35.731 Write Uncorrectable Command: Not Supported 00:14:35.731 Dataset Management Command: Supported 00:14:35.731 Write Zeroes Command: Supported 00:14:35.731 Set Features Save Field: Not Supported 00:14:35.731 Reservations: Not Supported 00:14:35.731 Timestamp: Not Supported 00:14:35.731 Copy: Supported 00:14:35.731 Volatile Write Cache: Present 00:14:35.731 Atomic Write Unit (Normal): 1 00:14:35.731 Atomic Write Unit (PFail): 1 00:14:35.731 Atomic Compare & Write Unit: 1 00:14:35.731 Fused Compare & Write: Supported 00:14:35.731 Scatter-Gather List 00:14:35.731 SGL Command Set: Supported (Dword aligned) 00:14:35.731 SGL Keyed: Not Supported 00:14:35.731 SGL Bit Bucket Descriptor: Not Supported 00:14:35.731 SGL Metadata Pointer: Not Supported 00:14:35.731 Oversized SGL: Not Supported 00:14:35.731 SGL Metadata Address: Not Supported 00:14:35.731 SGL Offset: Not Supported 00:14:35.731 Transport SGL Data Block: Not Supported 00:14:35.731 Replay Protected Memory Block: Not Supported 00:14:35.731 00:14:35.731 Firmware Slot Information 00:14:35.731 ========================= 00:14:35.731 Active slot: 1 00:14:35.731 Slot 1 Firmware Revision: 25.01 00:14:35.731 00:14:35.731 00:14:35.731 Commands Supported and Effects 00:14:35.731 ============================== 00:14:35.731 Admin 
Commands 00:14:35.731 -------------- 00:14:35.731 Get Log Page (02h): Supported 00:14:35.731 Identify (06h): Supported 00:14:35.731 Abort (08h): Supported 00:14:35.731 Set Features (09h): Supported 00:14:35.731 Get Features (0Ah): Supported 00:14:35.731 Asynchronous Event Request (0Ch): Supported 00:14:35.731 Keep Alive (18h): Supported 00:14:35.731 I/O Commands 00:14:35.731 ------------ 00:14:35.731 Flush (00h): Supported LBA-Change 00:14:35.731 Write (01h): Supported LBA-Change 00:14:35.731 Read (02h): Supported 00:14:35.731 Compare (05h): Supported 00:14:35.731 Write Zeroes (08h): Supported LBA-Change 00:14:35.731 Dataset Management (09h): Supported LBA-Change 00:14:35.731 Copy (19h): Supported LBA-Change 00:14:35.731 00:14:35.731 Error Log 00:14:35.731 ========= 00:14:35.731 00:14:35.731 Arbitration 00:14:35.731 =========== 00:14:35.731 Arbitration Burst: 1 00:14:35.731 00:14:35.731 Power Management 00:14:35.731 ================ 00:14:35.731 Number of Power States: 1 00:14:35.731 Current Power State: Power State #0 00:14:35.731 Power State #0: 00:14:35.731 Max Power: 0.00 W 00:14:35.731 Non-Operational State: Operational 00:14:35.731 Entry Latency: Not Reported 00:14:35.731 Exit Latency: Not Reported 00:14:35.731 Relative Read Throughput: 0 00:14:35.731 Relative Read Latency: 0 00:14:35.731 Relative Write Throughput: 0 00:14:35.731 Relative Write Latency: 0 00:14:35.731 Idle Power: Not Reported 00:14:35.731 Active Power: Not Reported 00:14:35.731 Non-Operational Permissive Mode: Not Supported 00:14:35.731 00:14:35.731 Health Information 00:14:35.731 ================== 00:14:35.731 Critical Warnings: 00:14:35.731 Available Spare Space: OK 00:14:35.731 Temperature: OK 00:14:35.731 Device Reliability: OK 00:14:35.731 Read Only: No 00:14:35.731 Volatile Memory Backup: OK 00:14:35.731 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:35.731 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:35.731 Available Spare: 0% 00:14:35.731 Available Spare Threshold: 0% 00:14:35.731 Life Percentage Used: 0% 00:14:35.731 Data Units Read: 0 00:14:35.731 Data Units Written: 0 00:14:35.731 Host Read Commands: 0 00:14:35.731 Host Write Commands: 0 00:14:35.731 Controller Busy Time: 0 minutes 00:14:35.731 Power Cycles: 0 00:14:35.731 Power On Hours: 0 hours 00:14:35.731 Unsafe Shutdowns: 0 00:14:35.731 Unrecoverable Media Errors: 0 00:14:35.731 Lifetime Error Log Entries: 0 00:14:35.731 Warning Temperature Time: 0 minutes 00:14:35.731 Critical Temperature Time: 0 minutes 00:14:35.731 00:14:35.731 Number of Queues 00:14:35.731 ================ 00:14:35.731 Number of I/O Submission Queues: 127 00:14:35.731 Number of I/O Completion Queues: 127 00:14:35.731 00:14:35.731 Active Namespaces 00:14:35.731 ================= 00:14:35.731 Namespace ID:1 00:14:35.731 Error Recovery Timeout: Unlimited 00:14:35.731 Command Set Identifier: NVM (00h) 00:14:35.731 Deallocate: Supported 00:14:35.731 Deallocated/Unwritten Error: Not Supported 00:14:35.731 Deallocated Read Value: Unknown 00:14:35.731 Deallocate in Write Zeroes: Not Supported 00:14:35.731 Deallocated Guard Field: 0xFFFF 00:14:35.731 Flush: Supported 00:14:35.731 Reservation: Supported 00:14:35.731 Namespace Sharing Capabilities: Multiple Controllers 00:14:35.731 Size (in LBAs): 131072 (0GiB) 00:14:35.731 Capacity (in LBAs): 131072 (0GiB) 00:14:35.731 Utilization (in LBAs): 131072 (0GiB) 00:14:35.731 NGUID: 049FDE8ACFA648269FEA10FCDCEC405B 00:14:35.731 UUID: 049fde8a-cfa6-4826-9fea-10fcdcec405b 00:14:35.731 Thin Provisioning: Not Supported 00:14:35.731 Per-NS Atomic Units: Yes 00:14:35.731 Atomic Boundary Size (Normal): 0 00:14:35.731 Atomic Boundary Size (PFail): 0 00:14:35.731 Atomic Boundary Offset: 0 00:14:35.731 Maximum Single Source Range Length: 65535 00:14:35.731 Maximum Copy Length: 65535 00:14:35.731 Maximum Source Range Count: 1 00:14:35.731 NGUID/EUI64 Never Reused: No 00:14:35.731 Namespace Write Protected: No 00:14:35.731 Number of LBA Formats: 1 00:14:35.731 Current LBA Format: LBA Format #00 00:14:35.731 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:35.731 00:14:35.731
[2024-11-19 10:41:43.133458] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:35.731 [2024-11-19 10:41:43.133466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:35.731 [2024-11-19 10:41:43.133489] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:14:35.731 [2024-11-19 10:41:43.133497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.731 [2024-11-19 10:41:43.133503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.731 [2024-11-19 10:41:43.133509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.731 [2024-11-19 10:41:43.133514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.731 [2024-11-19 10:41:43.133641] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:35.731 [2024-11-19 10:41:43.133649] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:35.731 [2024-11-19 10:41:43.134650] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:35.731 [2024-11-19 10:41:43.134699] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:14:35.731 [2024-11-19 10:41:43.134705] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:14:35.731 [2024-11-19 10:41:43.135651] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:35.731 [2024-11-19 10:41:43.135661] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:14:35.731 [2024-11-19 10:41:43.135707] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:35.731 [2024-11-19 10:41:43.137686] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:35.731 10:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
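The spdk_nvme_perf invocation above drives the 4 KiB read run whose output follows: -q 128 is the queue depth, -o 4096 the I/O size in bytes, -w read the workload pattern, -t 5 the duration in seconds, and -c 0x2 pins I/O to core 1, while -r supplies the transport ID string that points the tool at the vfio-user socket instead of a PCIe address. A hypothetical sweep over queue depths against the same controller could reuse everything but -q:

    PERF=./build/bin/spdk_nvme_perf   # the trace above uses the full workspace path
    TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
    for qd in 1 32 128; do
        # Same transport and I/O parameters as the traced run, varying only queue depth.
        $PERF -r "$TRID" -s 256 -g -q "$qd" -o 4096 -w read -t 5 -c 0x2
    done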
00:14:35.990 [2024-11-19 10:41:43.372795] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:41.255 Initializing NVMe Controllers 00:14:41.255 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:41.255 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:41.255 Initialization complete. Launching workers. 00:14:41.255 ======================================================== 00:14:41.255 Latency(us) 00:14:41.255 Device Information : IOPS MiB/s Average min max 00:14:41.255 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39901.71 155.87 3207.68 972.88 6646.17 00:14:41.255 ======================================================== 00:14:41.255 Total : 39901.71 155.87 3207.68 972.88 6646.17 00:14:41.256 00:14:41.256 [2024-11-19 10:41:48.390036] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:41.256 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:41.256 [2024-11-19 10:41:48.626086] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:46.520 Initializing NVMe Controllers 00:14:46.520 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:46.520 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:46.520 Initialization complete. Launching workers. 
00:14:46.520 ======================================================== 00:14:46.520 Latency(us) 00:14:46.520 Device Information : IOPS MiB/s Average min max 00:14:46.520 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16060.66 62.74 7975.10 4985.95 9976.12 00:14:46.520 ======================================================== 00:14:46.521 Total : 16060.66 62.74 7975.10 4985.95 9976.12 00:14:46.521 00:14:46.521 [2024-11-19 10:41:53.667571] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:46.521 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:46.521 [2024-11-19 10:41:53.874562] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:51.955 [2024-11-19 10:41:58.940223] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:51.955 Initializing NVMe Controllers 00:14:51.955 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:51.955 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:51.955 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:51.955 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:51.955 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:51.955 Initialization complete. Launching workers. 00:14:51.955 Starting thread on core 2 00:14:51.955 Starting thread on core 3 00:14:51.955 Starting thread on core 1 00:14:51.955 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:51.956 [2024-11-19 10:41:59.246298] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:55.279 [2024-11-19 10:42:02.316758] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:55.279 Initializing NVMe Controllers 00:14:55.279 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:55.279 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:55.279 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:55.279 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:55.279 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:55.279 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:55.279 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:55.279 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:55.279 Initialization complete. Launching workers. 
00:14:55.279 Starting thread on core 1 with urgent priority queue 00:14:55.279 Starting thread on core 2 with urgent priority queue 00:14:55.279 Starting thread on core 3 with urgent priority queue 00:14:55.279 Starting thread on core 0 with urgent priority queue 00:14:55.279 SPDK bdev Controller (SPDK1 ) core 0: 8175.00 IO/s 12.23 secs/100000 ios 00:14:55.279 SPDK bdev Controller (SPDK1 ) core 1: 7734.00 IO/s 12.93 secs/100000 ios 00:14:55.279 SPDK bdev Controller (SPDK1 ) core 2: 8575.33 IO/s 11.66 secs/100000 ios 00:14:55.279 SPDK bdev Controller (SPDK1 ) core 3: 7725.33 IO/s 12.94 secs/100000 ios 00:14:55.279 ======================================================== 00:14:55.279 00:14:55.279 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:55.279 [2024-11-19 10:42:02.613444] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:55.279 Initializing NVMe Controllers 00:14:55.279 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:55.279 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:55.279 Namespace ID: 1 size: 0GB 00:14:55.279 Initialization complete. 00:14:55.279 INFO: using host memory buffer for IO 00:14:55.279 Hello world! 00:14:55.279 [2024-11-19 10:42:02.646650] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:55.279 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:55.538 [2024-11-19 10:42:02.930316] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:56.915 Initializing NVMe Controllers 00:14:56.915 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:56.915 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:56.915 Initialization complete. Launching workers. 
00:14:56.915 submit (in ns) avg, min, max = 5715.5, 3252.2, 4000684.3 00:14:56.915 complete (in ns) avg, min, max = 23488.6, 1836.5, 4000034.8 00:14:56.915 00:14:56.915 Submit histogram 00:14:56.915 ================ 00:14:56.915 Range in us Cumulative Count 00:14:56.915 3.242 - 3.256: 0.0061% ( 1) 00:14:56.915 3.270 - 3.283: 0.0369% ( 5) 00:14:56.915 3.283 - 3.297: 0.0737% ( 6) 00:14:56.915 3.297 - 3.311: 0.1413% ( 11) 00:14:56.915 3.311 - 3.325: 0.2826% ( 23) 00:14:56.915 3.325 - 3.339: 0.7557% ( 77) 00:14:56.915 3.339 - 3.353: 2.4329% ( 273) 00:14:56.915 3.353 - 3.367: 6.7826% ( 708) 00:14:56.915 3.367 - 3.381: 12.5637% ( 941) 00:14:56.915 3.381 - 3.395: 18.7995% ( 1015) 00:14:56.915 3.395 - 3.409: 25.1213% ( 1029) 00:14:56.915 3.409 - 3.423: 31.2097% ( 991) 00:14:56.915 3.423 - 3.437: 36.4871% ( 859) 00:14:56.915 3.437 - 3.450: 41.9549% ( 890) 00:14:56.915 3.450 - 3.464: 46.3169% ( 710) 00:14:56.915 3.464 - 3.478: 50.4638% ( 675) 00:14:56.915 3.478 - 3.492: 55.7535% ( 861) 00:14:56.915 3.492 - 3.506: 63.2303% ( 1217) 00:14:56.915 3.506 - 3.520: 68.9255% ( 927) 00:14:56.915 3.520 - 3.534: 73.1769% ( 692) 00:14:56.915 3.534 - 3.548: 78.3560% ( 843) 00:14:56.915 3.548 - 3.562: 82.4906% ( 673) 00:14:56.915 3.562 - 3.590: 86.7605% ( 695) 00:14:56.915 3.590 - 3.617: 87.7680% ( 164) 00:14:56.915 3.617 - 3.645: 88.3762% ( 99) 00:14:56.915 3.645 - 3.673: 89.6664% ( 210) 00:14:56.915 3.673 - 3.701: 91.5709% ( 310) 00:14:56.915 3.701 - 3.729: 93.2236% ( 269) 00:14:56.915 3.729 - 3.757: 95.0052% ( 290) 00:14:56.915 3.757 - 3.784: 96.5903% ( 258) 00:14:56.915 3.784 - 3.812: 97.8866% ( 211) 00:14:56.915 3.812 - 3.840: 98.7344% ( 138) 00:14:56.915 3.840 - 3.868: 99.1890% ( 74) 00:14:56.915 3.868 - 3.896: 99.4594% ( 44) 00:14:56.915 3.896 - 3.923: 99.5822% ( 20) 00:14:56.915 3.923 - 3.951: 99.6007% ( 3) 00:14:56.915 3.979 - 4.007: 99.6130% ( 2) 00:14:56.915 4.063 - 4.090: 99.6191% ( 1) 00:14:56.915 4.090 - 4.118: 99.6252% ( 1) 00:14:56.915 5.760 - 5.788: 99.6314% ( 1) 00:14:56.915 5.843 - 5.871: 99.6375% ( 1) 00:14:56.915 5.899 - 5.927: 99.6437% ( 1) 00:14:56.915 5.983 - 6.010: 99.6498% ( 1) 00:14:56.915 6.177 - 6.205: 99.6560% ( 1) 00:14:56.915 6.233 - 6.261: 99.6621% ( 1) 00:14:56.915 6.289 - 6.317: 99.6682% ( 1) 00:14:56.915 6.372 - 6.400: 99.6744% ( 1) 00:14:56.915 6.400 - 6.428: 99.6805% ( 1) 00:14:56.915 6.428 - 6.456: 99.6867% ( 1) 00:14:56.915 6.539 - 6.567: 99.6928% ( 1) 00:14:56.915 6.623 - 6.650: 99.6990% ( 1) 00:14:56.915 6.650 - 6.678: 99.7051% ( 1) 00:14:56.915 6.734 - 6.762: 99.7112% ( 1) 00:14:56.915 6.845 - 6.873: 99.7174% ( 1) 00:14:56.915 6.929 - 6.957: 99.7235% ( 1) 00:14:56.915 6.957 - 6.984: 99.7297% ( 1) 00:14:56.915 6.984 - 7.012: 99.7358% ( 1) 00:14:56.915 7.040 - 7.068: 99.7420% ( 1) 00:14:56.915 7.179 - 7.235: 99.7543% ( 2) 00:14:56.915 7.235 - 7.290: 99.7604% ( 1) 00:14:56.915 7.290 - 7.346: 99.7665% ( 1) 00:14:56.915 7.402 - 7.457: 99.7788% ( 2) 00:14:56.915 7.457 - 7.513: 99.7850% ( 1) 00:14:56.915 7.624 - 7.680: 99.7911% ( 1) 00:14:56.915 7.791 - 7.847: 99.7973% ( 1) 00:14:56.915 7.847 - 7.903: 99.8034% ( 1) 00:14:56.915 7.958 - 8.014: 99.8095% ( 1) 00:14:56.915 8.014 - 8.070: 99.8280% ( 3) 00:14:56.915 8.125 - 8.181: 99.8464% ( 3) 00:14:56.915 8.237 - 8.292: 99.8526% ( 1) 00:14:56.915 8.292 - 8.348: 99.8587% ( 1) 00:14:56.915 8.348 - 8.403: 99.8648% ( 1) 00:14:56.915 8.459 - 8.515: 99.8771% ( 2) 00:14:56.915 8.793 - 8.849: 99.8833% ( 1) 00:14:56.915 8.849 - 8.904: 99.8894% ( 1) 00:14:56.915 9.016 - 9.071: 99.8956% ( 1) 00:14:56.915 9.071 - 9.127: 99.9140% ( 3) 
00:14:56.915 9.461 - 9.517: 99.9201% ( 1) 00:14:56.915 9.683 - 9.739: 99.9263% ( 1) 00:14:56.915 [2024-11-19 10:42:03.952277] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:56.915 10.741 - 10.797: 99.9324% ( 1) 00:14:56.915 12.800 - 12.856: 99.9386% ( 1) 00:14:56.915 14.915 - 15.026: 99.9447% ( 1) 00:14:56.915 3989.148 - 4017.642: 100.0000% ( 9) 00:14:56.915 00:14:56.915 Complete histogram 00:14:56.915 ================== 00:14:56.915 Range in us Cumulative Count 00:14:56.915 1.837 - 1.850: 0.2396% ( 39) 00:14:56.915 1.850 - 1.864: 0.7557% ( 84) 00:14:56.915 1.864 - 1.878: 1.8308% ( 175) 00:14:56.915 1.878 - 1.892: 6.2604% ( 721) 00:14:56.915 1.892 - 1.906: 44.1728% ( 6171) 00:14:56.915 1.906 - 1.920: 84.3153% ( 6534) 00:14:56.915 1.920 - 1.934: 95.0974% ( 1755) 00:14:56.915 1.934 - 1.948: 98.2614% ( 515) 00:14:56.916 1.948 - 1.962: 99.0047% ( 121) 00:14:56.916 1.962 - 1.976: 99.1215% ( 19) 00:14:56.916 1.976 - 1.990: 99.1583% ( 6) 00:14:56.916 1.990 - 2.003: 99.1768% ( 3) 00:14:56.916 2.003 - 2.017: 99.1829% ( 1) 00:14:56.916 2.017 - 2.031: 99.1890% ( 1) 00:14:56.916 2.031 - 2.045: 99.2013% ( 2) 00:14:56.916 2.045 - 2.059: 99.2075% ( 1) 00:14:56.916 2.059 - 2.073: 99.2136% ( 1) 00:14:56.916 2.073 - 2.087: 99.2198% ( 1) 00:14:56.916 2.087 - 2.101: 99.2259% ( 1) 00:14:56.916 2.129 - 2.143: 99.2382% ( 2) 00:14:56.916 2.143 - 2.157: 99.2443% ( 1) 00:14:56.916 2.268 - 2.282: 99.2505% ( 1) 00:14:56.916 2.296 - 2.310: 99.2566% ( 1) 00:14:56.916 2.532 - 2.546: 99.2628% ( 1) 00:14:56.916 4.369 - 4.397: 99.2689% ( 1) 00:14:56.916 4.563 - 4.591: 99.2751% ( 1) 00:14:56.916 4.647 - 4.675: 99.2873% ( 2) 00:14:56.916 4.814 - 4.842: 99.2935% ( 1) 00:14:56.916 5.148 - 5.176: 99.2996% ( 1) 00:14:56.916 5.370 - 5.398: 99.3058% ( 1) 00:14:56.916 5.482 - 5.510: 99.3119% ( 1) 00:14:56.916 5.649 - 5.677: 99.3181% ( 1) 00:14:56.916 5.677 - 5.704: 99.3242% ( 1) 00:14:56.916 5.843 - 5.871: 99.3303% ( 1) 00:14:56.916 6.094 - 6.122: 99.3365% ( 1) 00:14:56.916 6.122 - 6.150: 99.3426% ( 1) 00:14:56.916 6.177 - 6.205: 99.3488% ( 1) 00:14:56.916 6.372 - 6.400: 99.3549% ( 1) 00:14:56.916 6.400 - 6.428: 99.3611% ( 1) 00:14:56.916 6.428 - 6.456: 99.3733% ( 2) 00:14:56.916 6.483 - 6.511: 99.3795% ( 1) 00:14:56.916 6.539 - 6.567: 99.3856% ( 1) 00:14:56.916 6.567 - 6.595: 99.3918% ( 1) 00:14:56.916 6.706 - 6.734: 99.3979% ( 1) 00:14:56.916 6.762 - 6.790: 99.4041% ( 1) 00:14:56.916 6.790 - 6.817: 99.4102% ( 1) 00:14:56.916 6.845 - 6.873: 99.4225% ( 2) 00:14:56.916 6.901 - 6.929: 99.4286% ( 1) 00:14:56.916 6.984 - 7.012: 99.4348% ( 1) 00:14:56.916 7.096 - 7.123: 99.4409% ( 1) 00:14:56.916 7.235 - 7.290: 99.4471% ( 1) 00:14:56.916 7.569 - 7.624: 99.4532% ( 1) 00:14:56.916 7.847 - 7.903: 99.4594% ( 1) 00:14:56.916 3518.998 - 3533.245: 99.4655% ( 1) 00:14:56.916 3989.148 - 4017.642: 100.0000% ( 87) 00:14:56.916 00:14:56.916 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:56.916 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:56.916 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:56.916 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:56.916 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:56.916 [ 00:14:56.916 { 00:14:56.916 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:56.916 "subtype": "Discovery", 00:14:56.916 "listen_addresses": [], 00:14:56.916 "allow_any_host": true, 00:14:56.916 "hosts": [] 00:14:56.916 }, 00:14:56.916 { 00:14:56.916 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:56.916 "subtype": "NVMe", 00:14:56.916 "listen_addresses": [ 00:14:56.916 { 00:14:56.916 "trtype": "VFIOUSER", 00:14:56.916 "adrfam": "IPv4", 00:14:56.916 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:56.916 "trsvcid": "0" 00:14:56.916 } 00:14:56.916 ], 00:14:56.916 "allow_any_host": true, 00:14:56.916 "hosts": [], 00:14:56.916 "serial_number": "SPDK1", 00:14:56.916 "model_number": "SPDK bdev Controller", 00:14:56.916 "max_namespaces": 32, 00:14:56.916 "min_cntlid": 1, 00:14:56.916 "max_cntlid": 65519, 00:14:56.916 "namespaces": [ 00:14:56.916 { 00:14:56.916 "nsid": 1, 00:14:56.916 "bdev_name": "Malloc1", 00:14:56.916 "name": "Malloc1", 00:14:56.916 "nguid": "049FDE8ACFA648269FEA10FCDCEC405B", 00:14:56.916 "uuid": "049fde8a-cfa6-4826-9fea-10fcdcec405b" 00:14:56.916 } 00:14:56.916 ] 00:14:56.916 }, 00:14:56.916 { 00:14:56.916 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:56.916 "subtype": "NVMe", 00:14:56.916 "listen_addresses": [ 00:14:56.916 { 00:14:56.916 "trtype": "VFIOUSER", 00:14:56.916 "adrfam": "IPv4", 00:14:56.916 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:56.916 "trsvcid": "0" 00:14:56.916 } 00:14:56.916 ], 00:14:56.916 "allow_any_host": true, 00:14:56.916 "hosts": [], 00:14:56.916 "serial_number": "SPDK2", 00:14:56.916 "model_number": "SPDK bdev Controller", 00:14:56.916 "max_namespaces": 32, 00:14:56.916 "min_cntlid": 1, 00:14:56.916 "max_cntlid": 65519, 00:14:56.916 "namespaces": [ 00:14:56.916 { 00:14:56.916 "nsid": 1, 00:14:56.916 "bdev_name": "Malloc2", 00:14:56.916 "name": "Malloc2", 00:14:56.916 "nguid": "F60608EE6D09447CBB989125AD8A7006", 00:14:56.916 "uuid": "f60608ee-6d09-447c-bb98-9125ad8a7006" 00:14:56.916 } 00:14:56.916 ] 00:14:56.916 } 00:14:56.916 ] 00:14:56.916 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:56.916 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:56.916 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1657242 00:14:56.916 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:56.916 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:56.916 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:56.916 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:56.916 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:56.916 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:56.916 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:56.916 [2024-11-19 10:42:04.347362] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:57.175 Malloc3 00:14:57.175 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:57.175 [2024-11-19 10:42:04.604358] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:57.434 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:57.434 Asynchronous Event Request test 00:14:57.434 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:57.434 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:57.434 Registering asynchronous event callbacks... 00:14:57.435 Starting namespace attribute notice tests for all controllers... 00:14:57.435 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:57.435 aer_cb - Changed Namespace 00:14:57.435 Cleaning up... 00:14:57.435 [ 00:14:57.435 { 00:14:57.435 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:57.435 "subtype": "Discovery", 00:14:57.435 "listen_addresses": [], 00:14:57.435 "allow_any_host": true, 00:14:57.435 "hosts": [] 00:14:57.435 }, 00:14:57.435 { 00:14:57.435 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:57.435 "subtype": "NVMe", 00:14:57.435 "listen_addresses": [ 00:14:57.435 { 00:14:57.435 "trtype": "VFIOUSER", 00:14:57.435 "adrfam": "IPv4", 00:14:57.435 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:57.435 "trsvcid": "0" 00:14:57.435 } 00:14:57.435 ], 00:14:57.435 "allow_any_host": true, 00:14:57.435 "hosts": [], 00:14:57.435 "serial_number": "SPDK1", 00:14:57.435 "model_number": "SPDK bdev Controller", 00:14:57.435 "max_namespaces": 32, 00:14:57.435 "min_cntlid": 1, 00:14:57.435 "max_cntlid": 65519, 00:14:57.435 "namespaces": [ 00:14:57.435 { 00:14:57.435 "nsid": 1, 00:14:57.435 "bdev_name": "Malloc1", 00:14:57.435 "name": "Malloc1", 00:14:57.435 "nguid": "049FDE8ACFA648269FEA10FCDCEC405B", 00:14:57.435 "uuid": "049fde8a-cfa6-4826-9fea-10fcdcec405b" 00:14:57.435 }, 00:14:57.435 { 00:14:57.435 "nsid": 2, 00:14:57.435 "bdev_name": "Malloc3", 00:14:57.435 "name": "Malloc3", 00:14:57.435 "nguid": "D3B22C50869942128F9F7AF5CEF178FA", 00:14:57.435 "uuid": "d3b22c50-8699-4212-8f9f-7af5cef178fa" 00:14:57.435 } 00:14:57.435 ] 00:14:57.435 }, 00:14:57.435 { 00:14:57.435 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:57.435 "subtype": "NVMe", 00:14:57.435 "listen_addresses": [ 00:14:57.435 { 00:14:57.435 "trtype": "VFIOUSER", 00:14:57.435 "adrfam": "IPv4", 00:14:57.435 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:57.435 "trsvcid": "0" 00:14:57.435 } 00:14:57.435 ], 00:14:57.435 "allow_any_host": true, 00:14:57.435 "hosts": [], 00:14:57.435 "serial_number": "SPDK2", 00:14:57.435 "model_number": "SPDK bdev 
Controller", 00:14:57.435 "max_namespaces": 32, 00:14:57.435 "min_cntlid": 1, 00:14:57.435 "max_cntlid": 65519, 00:14:57.435 "namespaces": [ 00:14:57.435 { 00:14:57.435 "nsid": 1, 00:14:57.435 "bdev_name": "Malloc2", 00:14:57.435 "name": "Malloc2", 00:14:57.435 "nguid": "F60608EE6D09447CBB989125AD8A7006", 00:14:57.435 "uuid": "f60608ee-6d09-447c-bb98-9125ad8a7006" 00:14:57.435 } 00:14:57.435 ] 00:14:57.435 } 00:14:57.435 ] 00:14:57.435 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1657242 00:14:57.435 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:57.435 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:57.435 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:57.435 10:42:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:57.435 [2024-11-19 10:42:04.867641] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:14:57.435 [2024-11-19 10:42:04.867674] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1657478 ] 00:14:57.696 [2024-11-19 10:42:04.910566] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:57.696 [2024-11-19 10:42:04.919169] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:57.696 [2024-11-19 10:42:04.919194] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7efc32304000 00:14:57.696 [2024-11-19 10:42:04.920170] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:57.696 [2024-11-19 10:42:04.921178] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:57.696 [2024-11-19 10:42:04.922187] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:57.696 [2024-11-19 10:42:04.923191] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:57.696 [2024-11-19 10:42:04.924208] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:57.696 [2024-11-19 10:42:04.925215] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:57.696 [2024-11-19 10:42:04.926225] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:57.696 [2024-11-19 10:42:04.927227] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:14:57.696 [2024-11-19 10:42:04.928239] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:57.696 [2024-11-19 10:42:04.928250] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7efc322f9000 00:14:57.696 [2024-11-19 10:42:04.929194] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:57.696 [2024-11-19 10:42:04.942724] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:57.696 [2024-11-19 10:42:04.942750] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:14:57.696 [2024-11-19 10:42:04.944812] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:57.696 [2024-11-19 10:42:04.944853] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:57.696 [2024-11-19 10:42:04.944921] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:14:57.696 [2024-11-19 10:42:04.944934] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:14:57.696 [2024-11-19 10:42:04.944939] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:14:57.696 [2024-11-19 10:42:04.945822] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:57.696 [2024-11-19 10:42:04.945832] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:14:57.696 [2024-11-19 10:42:04.945839] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:14:57.696 [2024-11-19 10:42:04.946827] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:57.696 [2024-11-19 10:42:04.946837] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:14:57.696 [2024-11-19 10:42:04.946843] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:57.696 [2024-11-19 10:42:04.947838] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:57.696 [2024-11-19 10:42:04.947847] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:57.696 [2024-11-19 10:42:04.948840] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:57.696 [2024-11-19 10:42:04.948850] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
00:14:57.696 [2024-11-19 10:42:04.948855] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:57.696 [2024-11-19 10:42:04.948860] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:57.696 [2024-11-19 10:42:04.948968] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:14:57.696 [2024-11-19 10:42:04.948973] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:57.696 [2024-11-19 10:42:04.948977] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:57.696 [2024-11-19 10:42:04.949848] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:57.696 [2024-11-19 10:42:04.950851] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:57.696 [2024-11-19 10:42:04.951860] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:57.696 [2024-11-19 10:42:04.952861] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:57.696 [2024-11-19 10:42:04.952900] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:57.696 [2024-11-19 10:42:04.953870] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:57.696 [2024-11-19 10:42:04.953880] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:57.696 [2024-11-19 10:42:04.953884] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:57.696 [2024-11-19 10:42:04.953901] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:14:57.696 [2024-11-19 10:42:04.953908] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:57.696 [2024-11-19 10:42:04.953919] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:57.696 [2024-11-19 10:42:04.953924] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:57.696 [2024-11-19 10:42:04.953927] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:57.696 [2024-11-19 10:42:04.953938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:57.696 [2024-11-19 10:42:04.961957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:57.696 
[2024-11-19 10:42:04.961970] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:14:57.696 [2024-11-19 10:42:04.961975] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:14:57.696 [2024-11-19 10:42:04.961979] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:14:57.696 [2024-11-19 10:42:04.961983] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:57.696 [2024-11-19 10:42:04.961990] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:14:57.696 [2024-11-19 10:42:04.961995] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:14:57.696 [2024-11-19 10:42:04.961999] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:14:57.697 [2024-11-19 10:42:04.962008] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:57.697 [2024-11-19 10:42:04.962018] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:57.697 [2024-11-19 10:42:04.969955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:57.697 [2024-11-19 10:42:04.969968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.697 [2024-11-19 10:42:04.969978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.697 [2024-11-19 10:42:04.969985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.697 [2024-11-19 10:42:04.969995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.697 [2024-11-19 10:42:04.969999] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:57.697 [2024-11-19 10:42:04.970008] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:57.697 [2024-11-19 10:42:04.970017] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:57.697 [2024-11-19 10:42:04.977954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:57.697 [2024-11-19 10:42:04.977965] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:14:57.697 [2024-11-19 10:42:04.977970] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:14:57.697 [2024-11-19 10:42:04.977976] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:14:57.697 [2024-11-19 10:42:04.977981] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:57.697 [2024-11-19 10:42:04.977989] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:57.697 [2024-11-19 10:42:04.985953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:57.697 [2024-11-19 10:42:04.986011] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:14:57.697 [2024-11-19 10:42:04.986019] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:57.697 [2024-11-19 10:42:04.986026] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:57.697 [2024-11-19 10:42:04.986030] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:57.697 [2024-11-19 10:42:04.986033] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:57.697 [2024-11-19 10:42:04.986040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:57.697 [2024-11-19 10:42:04.993954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:57.697 [2024-11-19 10:42:04.993966] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:14:57.697 [2024-11-19 10:42:04.993974] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:14:57.697 [2024-11-19 10:42:04.993981] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:57.697 [2024-11-19 10:42:04.993988] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:57.697 [2024-11-19 10:42:04.993992] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:57.697 [2024-11-19 10:42:04.993995] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:57.697 [2024-11-19 10:42:04.994000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:57.697 [2024-11-19 10:42:05.001952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:57.697 [2024-11-19 10:42:05.001969] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:57.697 [2024-11-19 10:42:05.001977] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:14:57.697 [2024-11-19 10:42:05.001983] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:57.697 [2024-11-19 10:42:05.001987] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:57.697 [2024-11-19 10:42:05.001991] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:57.697 [2024-11-19 10:42:05.001996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:57.697 [2024-11-19 10:42:05.009957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:57.697 [2024-11-19 10:42:05.009966] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:57.697 [2024-11-19 10:42:05.009972] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:57.697 [2024-11-19 10:42:05.009980] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:14:57.697 [2024-11-19 10:42:05.009985] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:57.697 [2024-11-19 10:42:05.009990] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:57.697 [2024-11-19 10:42:05.009994] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:14:57.697 [2024-11-19 10:42:05.009998] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:57.697 [2024-11-19 10:42:05.010002] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:14:57.697 [2024-11-19 10:42:05.010007] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:14:57.697 [2024-11-19 10:42:05.010023] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:57.697 [2024-11-19 10:42:05.017955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:57.697 [2024-11-19 10:42:05.017968] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:57.697 [2024-11-19 10:42:05.025953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:57.697 [2024-11-19 10:42:05.025966] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:57.697 [2024-11-19 10:42:05.033954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
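Every *DEBUG* record in this bring-up sequence (register reads, controller state transitions, admin commands) appears because the spdk_nvme_identify invocation above was passed -L debug-log flags; a sketch of the same call, assuming the workspace layout shown in the trace:

    # -L enables per-component debug logging; nvme, nvme_vfio and vfio_pci are
    # exactly the components whose traces appear in this section of the log.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_DIR"/build/bin/spdk_nvme_identify -g \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
        -L nvme -L nvme_vfio -L vfio_pci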
00:14:57.697 [2024-11-19 10:42:05.033966] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:57.697 [2024-11-19 10:42:05.041953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:57.697 [2024-11-19 10:42:05.041969] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:57.697 [2024-11-19 10:42:05.041976] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:57.697 [2024-11-19 10:42:05.041979] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:57.697 [2024-11-19 10:42:05.041982] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:57.697 [2024-11-19 10:42:05.041985] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:57.697 [2024-11-19 10:42:05.041991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:57.697 [2024-11-19 10:42:05.041998] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:57.697 [2024-11-19 10:42:05.042002] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:57.697 [2024-11-19 10:42:05.042005] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:57.697 [2024-11-19 10:42:05.042010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:57.697 [2024-11-19 10:42:05.042016] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:57.697 [2024-11-19 10:42:05.042020] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:57.697 [2024-11-19 10:42:05.042023] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:57.697 [2024-11-19 10:42:05.042028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:57.697 [2024-11-19 10:42:05.042035] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:57.697 [2024-11-19 10:42:05.042039] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:57.697 [2024-11-19 10:42:05.042042] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:57.697 [2024-11-19 10:42:05.042047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:57.697 [2024-11-19 10:42:05.049954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:57.697 [2024-11-19 10:42:05.049968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:57.697 [2024-11-19 10:42:05.049979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:57.697 
[2024-11-19 10:42:05.049985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:57.697 ===================================================== 00:14:57.697 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:57.697 ===================================================== 00:14:57.697 Controller Capabilities/Features 00:14:57.698 ================================ 00:14:57.698 Vendor ID: 4e58 00:14:57.698 Subsystem Vendor ID: 4e58 00:14:57.698 Serial Number: SPDK2 00:14:57.698 Model Number: SPDK bdev Controller 00:14:57.698 Firmware Version: 25.01 00:14:57.698 Recommended Arb Burst: 6 00:14:57.698 IEEE OUI Identifier: 8d 6b 50 00:14:57.698 Multi-path I/O 00:14:57.698 May have multiple subsystem ports: Yes 00:14:57.698 May have multiple controllers: Yes 00:14:57.698 Associated with SR-IOV VF: No 00:14:57.698 Max Data Transfer Size: 131072 00:14:57.698 Max Number of Namespaces: 32 00:14:57.698 Max Number of I/O Queues: 127 00:14:57.698 NVMe Specification Version (VS): 1.3 00:14:57.698 NVMe Specification Version (Identify): 1.3 00:14:57.698 Maximum Queue Entries: 256 00:14:57.698 Contiguous Queues Required: Yes 00:14:57.698 Arbitration Mechanisms Supported 00:14:57.698 Weighted Round Robin: Not Supported 00:14:57.698 Vendor Specific: Not Supported 00:14:57.698 Reset Timeout: 15000 ms 00:14:57.698 Doorbell Stride: 4 bytes 00:14:57.698 NVM Subsystem Reset: Not Supported 00:14:57.698 Command Sets Supported 00:14:57.698 NVM Command Set: Supported 00:14:57.698 Boot Partition: Not Supported 00:14:57.698 Memory Page Size Minimum: 4096 bytes 00:14:57.698 Memory Page Size Maximum: 4096 bytes 00:14:57.698 Persistent Memory Region: Not Supported 00:14:57.698 Optional Asynchronous Events Supported 00:14:57.698 Namespace Attribute Notices: Supported 00:14:57.698 Firmware Activation Notices: Not Supported 00:14:57.698 ANA Change Notices: Not Supported 00:14:57.698 PLE Aggregate Log Change Notices: Not Supported 00:14:57.698 LBA Status Info Alert Notices: Not Supported 00:14:57.698 EGE Aggregate Log Change Notices: Not Supported 00:14:57.698 Normal NVM Subsystem Shutdown event: Not Supported 00:14:57.698 Zone Descriptor Change Notices: Not Supported 00:14:57.698 Discovery Log Change Notices: Not Supported 00:14:57.698 Controller Attributes 00:14:57.698 128-bit Host Identifier: Supported 00:14:57.698 Non-Operational Permissive Mode: Not Supported 00:14:57.698 NVM Sets: Not Supported 00:14:57.698 Read Recovery Levels: Not Supported 00:14:57.698 Endurance Groups: Not Supported 00:14:57.698 Predictable Latency Mode: Not Supported 00:14:57.698 Traffic Based Keep ALive: Not Supported 00:14:57.698 Namespace Granularity: Not Supported 00:14:57.698 SQ Associations: Not Supported 00:14:57.698 UUID List: Not Supported 00:14:57.698 Multi-Domain Subsystem: Not Supported 00:14:57.698 Fixed Capacity Management: Not Supported 00:14:57.698 Variable Capacity Management: Not Supported 00:14:57.698 Delete Endurance Group: Not Supported 00:14:57.698 Delete NVM Set: Not Supported 00:14:57.698 Extended LBA Formats Supported: Not Supported 00:14:57.698 Flexible Data Placement Supported: Not Supported 00:14:57.698 00:14:57.698 Controller Memory Buffer Support 00:14:57.698 ================================ 00:14:57.698 Supported: No 00:14:57.698 00:14:57.698 Persistent Memory Region Support 00:14:57.698 ================================ 00:14:57.698 Supported: No 00:14:57.698 00:14:57.698 Admin Command Set Attributes 
00:14:57.698 ============================ 00:14:57.698 Security Send/Receive: Not Supported 00:14:57.698 Format NVM: Not Supported 00:14:57.698 Firmware Activate/Download: Not Supported 00:14:57.698 Namespace Management: Not Supported 00:14:57.698 Device Self-Test: Not Supported 00:14:57.698 Directives: Not Supported 00:14:57.698 NVMe-MI: Not Supported 00:14:57.698 Virtualization Management: Not Supported 00:14:57.698 Doorbell Buffer Config: Not Supported 00:14:57.698 Get LBA Status Capability: Not Supported 00:14:57.698 Command & Feature Lockdown Capability: Not Supported 00:14:57.698 Abort Command Limit: 4 00:14:57.698 Async Event Request Limit: 4 00:14:57.698 Number of Firmware Slots: N/A 00:14:57.698 Firmware Slot 1 Read-Only: N/A 00:14:57.698 Firmware Activation Without Reset: N/A 00:14:57.698 Multiple Update Detection Support: N/A 00:14:57.698 Firmware Update Granularity: No Information Provided 00:14:57.698 Per-Namespace SMART Log: No 00:14:57.698 Asymmetric Namespace Access Log Page: Not Supported 00:14:57.698 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:57.698 Command Effects Log Page: Supported 00:14:57.698 Get Log Page Extended Data: Supported 00:14:57.698 Telemetry Log Pages: Not Supported 00:14:57.698 Persistent Event Log Pages: Not Supported 00:14:57.698 Supported Log Pages Log Page: May Support 00:14:57.698 Commands Supported & Effects Log Page: Not Supported 00:14:57.698 Feature Identifiers & Effects Log Page:May Support 00:14:57.698 NVMe-MI Commands & Effects Log Page: May Support 00:14:57.698 Data Area 4 for Telemetry Log: Not Supported 00:14:57.698 Error Log Page Entries Supported: 128 00:14:57.698 Keep Alive: Supported 00:14:57.698 Keep Alive Granularity: 10000 ms 00:14:57.698 00:14:57.698 NVM Command Set Attributes 00:14:57.698 ========================== 00:14:57.698 Submission Queue Entry Size 00:14:57.698 Max: 64 00:14:57.698 Min: 64 00:14:57.698 Completion Queue Entry Size 00:14:57.698 Max: 16 00:14:57.698 Min: 16 00:14:57.698 Number of Namespaces: 32 00:14:57.698 Compare Command: Supported 00:14:57.698 Write Uncorrectable Command: Not Supported 00:14:57.698 Dataset Management Command: Supported 00:14:57.698 Write Zeroes Command: Supported 00:14:57.698 Set Features Save Field: Not Supported 00:14:57.698 Reservations: Not Supported 00:14:57.698 Timestamp: Not Supported 00:14:57.698 Copy: Supported 00:14:57.698 Volatile Write Cache: Present 00:14:57.698 Atomic Write Unit (Normal): 1 00:14:57.698 Atomic Write Unit (PFail): 1 00:14:57.698 Atomic Compare & Write Unit: 1 00:14:57.698 Fused Compare & Write: Supported 00:14:57.698 Scatter-Gather List 00:14:57.698 SGL Command Set: Supported (Dword aligned) 00:14:57.698 SGL Keyed: Not Supported 00:14:57.698 SGL Bit Bucket Descriptor: Not Supported 00:14:57.698 SGL Metadata Pointer: Not Supported 00:14:57.698 Oversized SGL: Not Supported 00:14:57.698 SGL Metadata Address: Not Supported 00:14:57.698 SGL Offset: Not Supported 00:14:57.698 Transport SGL Data Block: Not Supported 00:14:57.698 Replay Protected Memory Block: Not Supported 00:14:57.698 00:14:57.698 Firmware Slot Information 00:14:57.698 ========================= 00:14:57.698 Active slot: 1 00:14:57.698 Slot 1 Firmware Revision: 25.01 00:14:57.698 00:14:57.698 00:14:57.698 Commands Supported and Effects 00:14:57.698 ============================== 00:14:57.698 Admin Commands 00:14:57.698 -------------- 00:14:57.698 Get Log Page (02h): Supported 00:14:57.698 Identify (06h): Supported 00:14:57.698 Abort (08h): Supported 00:14:57.698 Set Features (09h): Supported 
00:14:57.698 Get Features (0Ah): Supported 00:14:57.698 Asynchronous Event Request (0Ch): Supported 00:14:57.698 Keep Alive (18h): Supported 00:14:57.698 I/O Commands 00:14:57.698 ------------ 00:14:57.698 Flush (00h): Supported LBA-Change 00:14:57.698 Write (01h): Supported LBA-Change 00:14:57.698 Read (02h): Supported 00:14:57.698 Compare (05h): Supported 00:14:57.698 Write Zeroes (08h): Supported LBA-Change 00:14:57.698 Dataset Management (09h): Supported LBA-Change 00:14:57.698 Copy (19h): Supported LBA-Change 00:14:57.698 00:14:57.698 Error Log 00:14:57.698 ========= 00:14:57.698 00:14:57.698 Arbitration 00:14:57.698 =========== 00:14:57.698 Arbitration Burst: 1 00:14:57.698 00:14:57.698 Power Management 00:14:57.698 ================ 00:14:57.698 Number of Power States: 1 00:14:57.698 Current Power State: Power State #0 00:14:57.698 Power State #0: 00:14:57.698 Max Power: 0.00 W 00:14:57.698 Non-Operational State: Operational 00:14:57.698 Entry Latency: Not Reported 00:14:57.698 Exit Latency: Not Reported 00:14:57.699 Relative Read Throughput: 0 00:14:57.699 Relative Read Latency: 0 00:14:57.699 Relative Write Throughput: 0 00:14:57.699 Relative Write Latency: 0 00:14:57.699 Idle Power: Not Reported 00:14:57.699 Active Power: Not Reported 00:14:57.699 Non-Operational Permissive Mode: Not Supported 00:14:57.699 00:14:57.699 Health Information 00:14:57.699 ================== 00:14:57.699 Critical Warnings: 00:14:57.699 Available Spare Space: OK 00:14:57.699 Temperature: OK 00:14:57.699 Device Reliability: OK 00:14:57.699 Read Only: No 00:14:57.699 Volatile Memory Backup: OK 00:14:57.699 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:57.699 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:57.699 Available Spare: 0% 00:14:57.699 Available Spare Threshold: 0% 00:14:57.699 Life Percentage Used: 0% 00:14:57.699 Data Units Read: 0 00:14:57.699 Data Units Written: 0 00:14:57.699 Host Read Commands: 0 00:14:57.699 Host Write Commands: 0 00:14:57.699 Controller Busy Time: 0 minutes 00:14:57.699 Power Cycles: 0 00:14:57.699 Power On Hours: 0 hours 00:14:57.699 Unsafe Shutdowns: 0 00:14:57.699 Unrecoverable Media Errors: 0 00:14:57.699 Lifetime Error Log Entries: 0 00:14:57.699 Warning Temperature Time: 0 minutes 00:14:57.699 Critical Temperature Time: 0 minutes 00:14:57.699 00:14:57.699 Number of Queues 00:14:57.699 ================ 00:14:57.699 Number of I/O Submission Queues: 127 00:14:57.699 Number of I/O Completion Queues: 127 00:14:57.699 00:14:57.699 Active Namespaces 00:14:57.699 ================= 00:14:57.699 Namespace ID:1 00:14:57.699 Error Recovery Timeout: Unlimited 00:14:57.699 Command Set Identifier: NVM (00h) 00:14:57.699 Deallocate: Supported 00:14:57.699 Deallocated/Unwritten Error: Not Supported 00:14:57.699 Deallocated Read Value: Unknown 00:14:57.699 Deallocate in Write Zeroes: Not Supported 00:14:57.699 Deallocated Guard Field: 0xFFFF 00:14:57.699 Flush: Supported 00:14:57.699 Reservation: Supported 00:14:57.699 Namespace Sharing Capabilities: Multiple Controllers 00:14:57.699 Size (in LBAs): 131072 (0GiB) 00:14:57.699 Capacity (in LBAs): 131072 (0GiB) 00:14:57.699 Utilization (in LBAs): 131072 (0GiB) 00:14:57.699 NGUID: F60608EE6D09447CBB989125AD8A7006 00:14:57.699 UUID: f60608ee-6d09-447c-bb98-9125ad8a7006 00:14:57.699 Thin Provisioning: Not Supported 00:14:57.699 Per-NS Atomic Units: Yes 00:14:57.699 Atomic Boundary Size (Normal): 0 00:14:57.699 Atomic Boundary Size (PFail): 0 00:14:57.699 Atomic Boundary Offset: 0 00:14:57.699 Maximum Single Source Range Length: 65535 00:14:57.699 Maximum Copy Length: 65535 00:14:57.699 Maximum Source Range Count: 1 00:14:57.699 NGUID/EUI64 Never Reused: No 00:14:57.699 Namespace Write Protected: No 00:14:57.699 Number of LBA Formats: 1 00:14:57.699 Current LBA Format: LBA Format #00 00:14:57.699 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:57.699 00:14:57.699
[2024-11-19 10:42:05.050073] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 [2024-11-19 10:42:05.057954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 [2024-11-19 10:42:05.057981] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD [2024-11-19 10:42:05.057990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-11-19 10:42:05.057995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-11-19 10:42:05.058003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-11-19 10:42:05.058010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-11-19 10:42:05.058068] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 [2024-11-19 10:42:05.058080] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 [2024-11-19 10:42:05.059070] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller [2024-11-19 10:42:05.059116] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us [2024-11-19 10:42:05.059122] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms [2024-11-19 10:42:05.060077] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 [2024-11-19 10:42:05.060089] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds [2024-11-19 10:42:05.060136] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl [2024-11-19 10:42:05.061289] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 [2024-11-19 10:42:05.298367] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:03.225 Initializing NVMe Controllers 00:15:03.225
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:03.225 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:03.225 Initialization complete. Launching workers. 00:15:03.225 ======================================================== 00:15:03.225 Latency(us) 00:15:03.225 Device Information : IOPS MiB/s Average min max 00:15:03.225 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39946.59 156.04 3204.10 961.86 6625.41 00:15:03.225 ======================================================== 00:15:03.225 Total : 39946.59 156.04 3204.10 961.86 6625.41 00:15:03.225 00:15:03.225 [2024-11-19 10:42:10.405228] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:03.225 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:03.225 [2024-11-19 10:42:10.644894] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:08.494 Initializing NVMe Controllers 00:15:08.494 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:08.494 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:08.495 Initialization complete. Launching workers. 00:15:08.495 ======================================================== 00:15:08.495 Latency(us) 00:15:08.495 Device Information : IOPS MiB/s Average min max 00:15:08.495 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39979.17 156.17 3201.69 972.25 6615.80 00:15:08.495 ======================================================== 00:15:08.495 Total : 39979.17 156.17 3201.69 972.25 6615.80 00:15:08.495 00:15:08.495 [2024-11-19 10:42:15.666004] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:08.495 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:08.495 [2024-11-19 10:42:15.881493] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:13.762 [2024-11-19 10:42:21.016039] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:13.762 Initializing NVMe Controllers 00:15:13.762 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:13.762 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:13.762 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:13.762 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:13.762 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:13.762 Initialization complete. Launching workers. 
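The reconnect example above drives queued I/O while the controller is detached and re-attached underneath it; a minimal standalone sketch, with flags copied from the @86 trace line (core mask 0xE schedules workers on cores 1-3):

    # 32-deep 4 KiB random read/write, 50% reads, for 5 seconds on cores 1-3.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_DIR"/build/examples/reconnect -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'

The worker-thread lines below resume that run's output.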
00:15:13.762 Starting thread on core 2 00:15:13.762 Starting thread on core 3 00:15:13.762 Starting thread on core 1 00:15:13.762 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:14.020 [2024-11-19 10:42:21.316448] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:17.307 [2024-11-19 10:42:24.387433] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:17.307 Initializing NVMe Controllers 00:15:17.307 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:17.307 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:17.307 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:17.307 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:17.307 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:17.307 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:17.307 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:17.307 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:17.307 Initialization complete. Launching workers. 00:15:17.307 Starting thread on core 1 with urgent priority queue 00:15:17.307 Starting thread on core 2 with urgent priority queue 00:15:17.307 Starting thread on core 3 with urgent priority queue 00:15:17.307 Starting thread on core 0 with urgent priority queue 00:15:17.307 SPDK bdev Controller (SPDK2 ) core 0: 7790.00 IO/s 12.84 secs/100000 ios 00:15:17.307 SPDK bdev Controller (SPDK2 ) core 1: 11663.33 IO/s 8.57 secs/100000 ios 00:15:17.307 SPDK bdev Controller (SPDK2 ) core 2: 8652.33 IO/s 11.56 secs/100000 ios 00:15:17.307 SPDK bdev Controller (SPDK2 ) core 3: 8138.67 IO/s 12.29 secs/100000 ios 00:15:17.307 ======================================================== 00:15:17.307 00:15:17.307 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:17.307 [2024-11-19 10:42:24.677389] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:17.307 Initializing NVMe Controllers 00:15:17.307 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:17.307 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:17.307 Namespace ID: 1 size: 0GB 00:15:17.307 Initialization complete. 00:15:17.307 INFO: using host memory buffer for IO 00:15:17.307 Hello world! 
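This hello_world pass repeats, for the second vfio-user device, the same sequence already run against vfio-user1; the driving loop, paraphrased from the @80-@82 trace lines earlier (NUM_DEVICES=2 is inferred from the two controllers in this log, not printed in it), looks roughly like:

    # Rough paraphrase of the test script's per-device loop; not verbatim.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    for i in $(seq 1 2); do
        test_traddr=/var/run/vfio-user/domain/vfio-user$i/$i
        test_subnqn=nqn.2019-07.io.spdk:cnode$i
        "$SPDK_DIR"/build/examples/hello_world -d 256 -g \
            -r "trtype:VFIOUSER traddr:$test_traddr subnqn:$test_subnqn"
    done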
00:15:17.307 [2024-11-19 10:42:24.689473] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:17.307 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:17.566 [2024-11-19 10:42:24.978254] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:18.944 Initializing NVMe Controllers 00:15:18.944 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:18.944 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:18.944 Initialization complete. Launching workers. 00:15:18.944 submit (in ns) avg, min, max = 7894.5, 3273.9, 7147004.3 00:15:18.944 complete (in ns) avg, min, max = 20919.5, 1773.0, 4005667.0 00:15:18.944 00:15:18.944 Submit histogram 00:15:18.944 ================ 00:15:18.944 Range in us Cumulative Count 00:15:18.944 3.270 - 3.283: 0.0124% ( 2) 00:15:18.944 3.283 - 3.297: 0.0619% ( 8) 00:15:18.944 3.297 - 3.311: 0.1732% ( 18) 00:15:18.944 3.311 - 3.325: 0.3278% ( 25) 00:15:18.944 3.325 - 3.339: 1.2186% ( 144) 00:15:18.944 3.339 - 3.353: 4.5961% ( 546) 00:15:18.944 3.353 - 3.367: 9.9530% ( 866) 00:15:18.944 3.367 - 3.381: 15.8728% ( 957) 00:15:18.944 3.381 - 3.395: 22.0401% ( 997) 00:15:18.944 3.395 - 3.409: 28.7331% ( 1082) 00:15:18.944 3.409 - 3.423: 34.1643% ( 878) 00:15:18.944 3.423 - 3.437: 39.5088% ( 864) 00:15:18.944 3.437 - 3.450: 44.8596% ( 865) 00:15:18.944 3.450 - 3.464: 48.9484% ( 661) 00:15:18.944 3.464 - 3.478: 52.4743% ( 570) 00:15:18.944 3.478 - 3.492: 56.7425% ( 690) 00:15:18.944 3.492 - 3.506: 63.3181% ( 1063) 00:15:18.944 3.506 - 3.520: 69.5101% ( 1001) 00:15:18.944 3.520 - 3.534: 73.1906% ( 595) 00:15:18.944 3.534 - 3.548: 78.1146% ( 796) 00:15:18.944 3.548 - 3.562: 82.2281% ( 665) 00:15:18.944 3.562 - 3.590: 86.4530% ( 683) 00:15:18.944 3.590 - 3.617: 87.4304% ( 158) 00:15:18.944 3.617 - 3.645: 88.1789% ( 121) 00:15:18.944 3.645 - 3.673: 89.7810% ( 259) 00:15:18.944 3.673 - 3.701: 91.5687% ( 289) 00:15:18.944 3.701 - 3.729: 93.3874% ( 294) 00:15:18.944 3.729 - 3.757: 95.0266% ( 265) 00:15:18.944 3.757 - 3.784: 96.4865% ( 236) 00:15:18.944 3.784 - 3.812: 97.9030% ( 229) 00:15:18.944 3.812 - 3.840: 98.6824% ( 126) 00:15:18.944 3.840 - 3.868: 99.1402% ( 74) 00:15:18.944 3.868 - 3.896: 99.4680% ( 53) 00:15:18.944 3.896 - 3.923: 99.5546% ( 14) 00:15:18.944 3.923 - 3.951: 99.5732% ( 3) 00:15:18.944 3.951 - 3.979: 99.5855% ( 2) 00:15:18.944 3.979 - 4.007: 99.5917% ( 1) 00:15:18.944 4.035 - 4.063: 99.5979% ( 1) 00:15:18.944 4.063 - 4.090: 99.6103% ( 2) 00:15:18.944 4.090 - 4.118: 99.6227% ( 2) 00:15:18.944 4.174 - 4.202: 99.6289% ( 1) 00:15:18.944 5.315 - 5.343: 99.6350% ( 1) 00:15:18.944 5.593 - 5.621: 99.6412% ( 1) 00:15:18.944 5.704 - 5.732: 99.6474% ( 1) 00:15:18.944 5.871 - 5.899: 99.6536% ( 1) 00:15:18.944 6.066 - 6.094: 99.6598% ( 1) 00:15:18.944 6.177 - 6.205: 99.6660% ( 1) 00:15:18.944 6.289 - 6.317: 99.6783% ( 2) 00:15:18.944 6.344 - 6.372: 99.6845% ( 1) 00:15:18.944 6.372 - 6.400: 99.6907% ( 1) 00:15:18.944 6.400 - 6.428: 99.7031% ( 2) 00:15:18.944 6.483 - 6.511: 99.7093% ( 1) 00:15:18.944 6.567 - 6.595: 99.7155% ( 1) 00:15:18.944 6.650 - 6.678: 99.7216% ( 1) 00:15:18.944 6.678 - 6.706: 99.7278% ( 1) 00:15:18.944 6.762 - 6.790: 99.7340% ( 1) 00:15:18.944 6.929 - 6.957: 99.7402% ( 1) 00:15:18.944 6.957 - 
6.984: 99.7464% ( 1) 00:15:18.944 7.012 - 7.040: 99.7526% ( 1) 00:15:18.944 7.096 - 7.123: 99.7588% ( 1) 00:15:18.944 7.123 - 7.179: 99.7649% ( 1) 00:15:18.944 7.346 - 7.402: 99.7897% ( 4) 00:15:18.944 7.402 - 7.457: 99.7959% ( 1) 00:15:18.944 7.457 - 7.513: 99.8082% ( 2) 00:15:18.944 7.513 - 7.569: 99.8144% ( 1) 00:15:18.944 7.569 - 7.624: 99.8206% ( 1) 00:15:18.944 7.736 - 7.791: 99.8268% ( 1) 00:15:18.944 7.847 - 7.903: 99.8330% ( 1) 00:15:18.944 7.903 - 7.958: 99.8392% ( 1) 00:15:18.944 8.014 - 8.070: 99.8454% ( 1) 00:15:18.944 8.181 - 8.237: 99.8515% ( 1) 00:15:18.944 8.237 - 8.292: 99.8577% ( 1) 00:15:18.944 8.459 - 8.515: 99.8639% ( 1) 00:15:18.944 8.570 - 8.626: 99.8701% ( 1) 00:15:18.944 8.682 - 8.737: 99.8763% ( 1) 00:15:18.944 8.793 - 8.849: 99.8887% ( 2) 00:15:18.944 9.350 - 9.405: 99.8948% ( 1) 00:15:18.944 3989.148 - 4017.642: 99.9938% ( 16) 00:15:18.944 7123.478 - 7151.972: 100.0000% ( 1) 00:15:18.944 00:15:18.944 [2024-11-19 10:42:26.072994] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:18.944 Complete histogram 00:15:18.944 ================== 00:15:18.944 Range in us Cumulative Count 00:15:18.944 1.767 - 1.774: 0.0062% ( 1) 00:15:18.944 1.809 - 1.823: 0.3093% ( 49) 00:15:18.944 1.823 - 1.837: 1.5526% ( 201) 00:15:18.944 1.837 - 1.850: 2.5114% ( 155) 00:15:18.944 1.850 - 1.864: 4.0208% ( 244) 00:15:18.944 1.864 - 1.878: 43.9626% ( 6457) 00:15:18.944 1.878 - 1.892: 86.4716% ( 6872) 00:15:18.944 1.892 - 1.906: 93.1585% ( 1081) 00:15:18.944 1.906 - 1.920: 96.1895% ( 490) 00:15:18.944 1.920 - 1.934: 96.7834% ( 96) 00:15:18.944 1.934 - 1.948: 97.5504% ( 124) 00:15:18.944 1.948 - 1.962: 98.4907% ( 152) 00:15:18.944 1.962 - 1.976: 99.0536% ( 91) 00:15:18.944 1.976 - 1.990: 99.1649% ( 18) 00:15:18.944 1.990 - 2.003: 99.1773% ( 2) 00:15:18.944 2.003 - 2.017: 99.1958% ( 3) 00:15:18.944 2.017 - 2.031: 99.2082% ( 2) 00:15:18.944 2.031 - 2.045: 99.2144% ( 1) 00:15:18.944 2.045 - 2.059: 99.2268% ( 2) 00:15:18.945 2.059 - 2.073: 99.2391% ( 2) 00:15:18.945 2.073 - 2.087: 99.2453% ( 1) 00:15:18.945 2.087 - 2.101: 99.2639% ( 3) 00:15:18.945 2.101 - 2.115: 99.2763% ( 2) 00:15:18.945 2.115 - 2.129: 99.2824% ( 1) 00:15:18.945 2.143 - 2.157: 99.2886% ( 1) 00:15:18.945 2.337 - 2.351: 99.2948% ( 1) 00:15:18.945 2.365 - 2.379: 99.3010% ( 1) 00:15:18.945 2.393 - 2.407: 99.3072% ( 1) 00:15:18.945 2.421 - 2.435: 99.3134% ( 1) 00:15:18.945 2.449 - 2.463: 99.3196% ( 1) 00:15:18.945 2.477 - 2.490: 99.3257% ( 1) 00:15:18.945 2.504 - 2.518: 99.3319% ( 1) 00:15:18.945 2.532 - 2.546: 99.3381% ( 1) 00:15:18.945 3.896 - 3.923: 99.3443% ( 1) 00:15:18.945 4.452 - 4.480: 99.3505% ( 1) 00:15:18.945 4.591 - 4.619: 99.3567% ( 1) 00:15:18.945 4.758 - 4.786: 99.3690% ( 2) 00:15:18.945 4.814 - 4.842: 99.3752% ( 1) 00:15:18.945 4.953 - 4.981: 99.3814% ( 1) 00:15:18.945 5.009 - 5.037: 99.3876% ( 1) 00:15:18.945 5.037 - 5.064: 99.3938% ( 1) 00:15:18.945 5.064 - 5.092: 99.4000% ( 1) 00:15:18.945 5.176 - 5.203: 99.4062% ( 1) 00:15:18.945 5.259 - 5.287: 99.4123% ( 1) 00:15:18.945 5.398 - 5.426: 99.4185% ( 1) 00:15:18.945 5.454 - 5.482: 99.4309% ( 2) 00:15:18.945 5.788 - 5.816: 99.4371% ( 1) 00:15:18.945 5.843 - 5.871: 99.4433% ( 1) 00:15:18.945 6.205 - 6.233: 99.4495% ( 1) 00:15:18.945 6.261 - 6.289: 99.4556% ( 1) 00:15:18.945 6.428 - 6.456: 99.4618% ( 1) 00:15:18.945 6.762 - 6.790: 99.4680% ( 1) 00:15:18.945 6.817 - 6.845: 99.4742% ( 1) 00:15:18.945 6.901 - 6.929: 99.4804% ( 1) 00:15:18.945 6.929 - 6.957: 99.4866% ( 1) 00:15:18.945 6.984 - 7.012: 99.4928% ( 1) 
00:15:18.945 7.012 - 7.040: 99.4989% ( 1) 00:15:18.945 7.402 - 7.457: 99.5051% ( 1) 00:15:18.945 7.457 - 7.513: 99.5113% ( 1) 00:15:18.945 7.513 - 7.569: 99.5175% ( 1) 00:15:18.945 7.791 - 7.847: 99.5237% ( 1) 00:15:18.945 3989.148 - 4017.642: 100.0000% ( 77) 00:15:18.945 00:15:18.945 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:18.945 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:18.945 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:18.945 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:18.945 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:18.945 [ 00:15:18.945 { 00:15:18.945 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:18.945 "subtype": "Discovery", 00:15:18.945 "listen_addresses": [], 00:15:18.945 "allow_any_host": true, 00:15:18.945 "hosts": [] 00:15:18.945 }, 00:15:18.945 { 00:15:18.945 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:18.945 "subtype": "NVMe", 00:15:18.945 "listen_addresses": [ 00:15:18.945 { 00:15:18.945 "trtype": "VFIOUSER", 00:15:18.945 "adrfam": "IPv4", 00:15:18.945 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:18.945 "trsvcid": "0" 00:15:18.945 } 00:15:18.945 ], 00:15:18.945 "allow_any_host": true, 00:15:18.945 "hosts": [], 00:15:18.945 "serial_number": "SPDK1", 00:15:18.945 "model_number": "SPDK bdev Controller", 00:15:18.945 "max_namespaces": 32, 00:15:18.945 "min_cntlid": 1, 00:15:18.945 "max_cntlid": 65519, 00:15:18.945 "namespaces": [ 00:15:18.945 { 00:15:18.945 "nsid": 1, 00:15:18.945 "bdev_name": "Malloc1", 00:15:18.945 "name": "Malloc1", 00:15:18.945 "nguid": "049FDE8ACFA648269FEA10FCDCEC405B", 00:15:18.945 "uuid": "049fde8a-cfa6-4826-9fea-10fcdcec405b" 00:15:18.945 }, 00:15:18.945 { 00:15:18.945 "nsid": 2, 00:15:18.945 "bdev_name": "Malloc3", 00:15:18.945 "name": "Malloc3", 00:15:18.945 "nguid": "D3B22C50869942128F9F7AF5CEF178FA", 00:15:18.945 "uuid": "d3b22c50-8699-4212-8f9f-7af5cef178fa" 00:15:18.945 } 00:15:18.945 ] 00:15:18.945 }, 00:15:18.945 { 00:15:18.945 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:18.945 "subtype": "NVMe", 00:15:18.945 "listen_addresses": [ 00:15:18.945 { 00:15:18.945 "trtype": "VFIOUSER", 00:15:18.945 "adrfam": "IPv4", 00:15:18.945 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:18.945 "trsvcid": "0" 00:15:18.945 } 00:15:18.945 ], 00:15:18.945 "allow_any_host": true, 00:15:18.945 "hosts": [], 00:15:18.945 "serial_number": "SPDK2", 00:15:18.945 "model_number": "SPDK bdev Controller", 00:15:18.945 "max_namespaces": 32, 00:15:18.945 "min_cntlid": 1, 00:15:18.945 "max_cntlid": 65519, 00:15:18.945 "namespaces": [ 00:15:18.945 { 00:15:18.945 "nsid": 1, 00:15:18.945 "bdev_name": "Malloc2", 00:15:18.945 "name": "Malloc2", 00:15:18.945 "nguid": "F60608EE6D09447CBB989125AD8A7006", 00:15:18.945 "uuid": "f60608ee-6d09-447c-bb98-9125ad8a7006" 00:15:18.945 } 00:15:18.945 ] 00:15:18.945 } 00:15:18.945 ] 00:15:18.945 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:18.945 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@34 -- # aerpid=1661317 00:15:18.945 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:18.945 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:18.945 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:18.945 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:18.945 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:18.945 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:18.945 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:18.945 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:19.204 [2024-11-19 10:42:26.468474] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:19.204 Malloc4 00:15:19.204 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:19.464 [2024-11-19 10:42:26.709303] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:19.464 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:19.464 Asynchronous Event Request test 00:15:19.464 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:19.464 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:19.464 Registering asynchronous event callbacks... 00:15:19.464 Starting namespace attribute notice tests for all controllers... 00:15:19.464 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:19.464 aer_cb - Changed Namespace 00:15:19.464 Cleaning up... 
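The AER check above is driven from two sides: test/nvme/aer/aer stays attached to cnode2 waiting for a namespace-change notice (the harness then blocks on /tmp/aer_touch_file via waitforfile), while the script hot-adds a namespace over RPC. A sketch of the RPC half, using only calls that appear verbatim in this run:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Create a 64 MiB malloc bdev with 512-byte blocks and attach it to cnode2 as
# NSID 2; the attach is what generates the AEN logged above.
$RPC bdev_malloc_create 64 512 --name Malloc4
$RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2

# Re-list subsystems to confirm NSID 2 (Malloc4) is now exported.
$RPC nvmf_get_subsystems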
00:15:19.464 [ 00:15:19.464 { 00:15:19.464 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:19.464 "subtype": "Discovery", 00:15:19.464 "listen_addresses": [], 00:15:19.464 "allow_any_host": true, 00:15:19.464 "hosts": [] 00:15:19.464 }, 00:15:19.464 { 00:15:19.464 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:19.464 "subtype": "NVMe", 00:15:19.464 "listen_addresses": [ 00:15:19.464 { 00:15:19.464 "trtype": "VFIOUSER", 00:15:19.464 "adrfam": "IPv4", 00:15:19.464 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:19.464 "trsvcid": "0" 00:15:19.464 } 00:15:19.464 ], 00:15:19.464 "allow_any_host": true, 00:15:19.464 "hosts": [], 00:15:19.464 "serial_number": "SPDK1", 00:15:19.464 "model_number": "SPDK bdev Controller", 00:15:19.464 "max_namespaces": 32, 00:15:19.464 "min_cntlid": 1, 00:15:19.464 "max_cntlid": 65519, 00:15:19.464 "namespaces": [ 00:15:19.464 { 00:15:19.464 "nsid": 1, 00:15:19.464 "bdev_name": "Malloc1", 00:15:19.464 "name": "Malloc1", 00:15:19.464 "nguid": "049FDE8ACFA648269FEA10FCDCEC405B", 00:15:19.464 "uuid": "049fde8a-cfa6-4826-9fea-10fcdcec405b" 00:15:19.464 }, 00:15:19.464 { 00:15:19.464 "nsid": 2, 00:15:19.464 "bdev_name": "Malloc3", 00:15:19.464 "name": "Malloc3", 00:15:19.464 "nguid": "D3B22C50869942128F9F7AF5CEF178FA", 00:15:19.464 "uuid": "d3b22c50-8699-4212-8f9f-7af5cef178fa" 00:15:19.464 } 00:15:19.464 ] 00:15:19.464 }, 00:15:19.464 { 00:15:19.464 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:19.464 "subtype": "NVMe", 00:15:19.464 "listen_addresses": [ 00:15:19.464 { 00:15:19.464 "trtype": "VFIOUSER", 00:15:19.464 "adrfam": "IPv4", 00:15:19.464 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:19.464 "trsvcid": "0" 00:15:19.464 } 00:15:19.464 ], 00:15:19.464 "allow_any_host": true, 00:15:19.464 "hosts": [], 00:15:19.464 "serial_number": "SPDK2", 00:15:19.464 "model_number": "SPDK bdev Controller", 00:15:19.464 "max_namespaces": 32, 00:15:19.464 "min_cntlid": 1, 00:15:19.464 "max_cntlid": 65519, 00:15:19.464 "namespaces": [ 00:15:19.464 { 00:15:19.464 "nsid": 1, 00:15:19.464 "bdev_name": "Malloc2", 00:15:19.464 "name": "Malloc2", 00:15:19.464 "nguid": "F60608EE6D09447CBB989125AD8A7006", 00:15:19.464 "uuid": "f60608ee-6d09-447c-bb98-9125ad8a7006" 00:15:19.464 }, 00:15:19.464 { 00:15:19.464 "nsid": 2, 00:15:19.464 "bdev_name": "Malloc4", 00:15:19.464 "name": "Malloc4", 00:15:19.465 "nguid": "BB8F98EA8F3E4BC8AAD5A8874C894FC8", 00:15:19.465 "uuid": "bb8f98ea-8f3e-4bc8-aad5-a8874c894fc8" 00:15:19.465 } 00:15:19.465 ] 00:15:19.465 } 00:15:19.465 ] 00:15:19.724 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1661317 00:15:19.724 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:19.724 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1653155 00:15:19.724 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1653155 ']' 00:15:19.724 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1653155 00:15:19.724 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:19.724 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:19.724 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1653155 00:15:19.724 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:19.724 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:19.724 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1653155' 00:15:19.724 killing process with pid 1653155 00:15:19.724 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1653155 00:15:19.724 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1653155 00:15:19.983 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:19.983 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:19.983 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:19.983 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:19.983 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:19.983 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1661478 00:15:19.983 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1661478' 00:15:19.983 Process pid: 1661478 00:15:19.983 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:19.983 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:19.983 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1661478 00:15:19.983 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1661478 ']' 00:15:19.983 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.983 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:19.983 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:19.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:19.983 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:19.983 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:19.983 [2024-11-19 10:42:27.267811] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:19.983 [2024-11-19 10:42:27.268702] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:15:19.983 [2024-11-19 10:42:27.268742] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:19.983 [2024-11-19 10:42:27.345807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:19.983 [2024-11-19 10:42:27.385493] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:19.983 [2024-11-19 10:42:27.385533] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:19.983 [2024-11-19 10:42:27.385540] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:19.983 [2024-11-19 10:42:27.385547] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:19.983 [2024-11-19 10:42:27.385552] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:19.983 [2024-11-19 10:42:27.387010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:19.983 [2024-11-19 10:42:27.387117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:19.983 [2024-11-19 10:42:27.387226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.983 [2024-11-19 10:42:27.387227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:20.241 [2024-11-19 10:42:27.455449] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:20.241 [2024-11-19 10:42:27.456301] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:20.241 [2024-11-19 10:42:27.456488] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:20.241 [2024-11-19 10:42:27.456876] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:20.241 [2024-11-19 10:42:27.456923] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
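For this interrupt-mode pass, the target is restarted with --interrupt-mode and the VFIOUSER transport is created with -M -I, after which the same two-device setup loop runs as in the polled-mode pass. A condensed sketch with flags copied from the script trace; the sleep/waitforlisten synchronization between launch and first RPC is elided:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC=$SPDK_DIR/scripts/rpc.py

# Reactors on cores 0-3, interrupt-driven instead of polling.
$SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &

$RPC nvmf_create_transport -t VFIOUSER -M -I
for i in 1 2; do
  mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
  $RPC bdev_malloc_create 64 512 -b Malloc$i
  $RPC nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
  $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
  $RPC nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
      -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
done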
00:15:20.241 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:20.241 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:15:20.241 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:21.178 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:21.437 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:21.437 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:21.437 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:21.437 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:21.437 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:21.696 Malloc1 00:15:21.696 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:21.696 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:21.955 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:22.214 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:22.214 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:22.214 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:22.473 Malloc2 00:15:22.473 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:22.730 10:42:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:22.730 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:22.988 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:22.988 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1661478 00:15:22.988 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 1661478 ']' 00:15:22.988 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1661478 00:15:22.988 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:22.988 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:22.988 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1661478 00:15:22.988 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:22.988 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:22.988 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1661478' 00:15:22.988 killing process with pid 1661478 00:15:22.988 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1661478 00:15:22.988 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1661478 00:15:23.247 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:23.247 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:23.247 00:15:23.247 real 0m50.930s 00:15:23.247 user 3m16.895s 00:15:23.247 sys 0m3.337s 00:15:23.247 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:23.247 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:23.247 ************************************ 00:15:23.247 END TEST nvmf_vfio_user 00:15:23.247 ************************************ 00:15:23.248 10:42:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:23.248 10:42:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:23.248 10:42:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:23.248 10:42:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:23.248 ************************************ 00:15:23.248 START TEST nvmf_vfio_user_nvme_compliance 00:15:23.248 ************************************ 00:15:23.248 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:23.507 * Looking for test storage... 
00:15:23.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:23.507 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:23.507 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:15:23.507 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:23.507 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:23.507 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:23.507 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:23.507 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:23.507 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:23.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.508 --rc genhtml_branch_coverage=1 00:15:23.508 --rc genhtml_function_coverage=1 00:15:23.508 --rc genhtml_legend=1 00:15:23.508 --rc geninfo_all_blocks=1 00:15:23.508 --rc geninfo_unexecuted_blocks=1 00:15:23.508 00:15:23.508 ' 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:23.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.508 --rc genhtml_branch_coverage=1 00:15:23.508 --rc genhtml_function_coverage=1 00:15:23.508 --rc genhtml_legend=1 00:15:23.508 --rc geninfo_all_blocks=1 00:15:23.508 --rc geninfo_unexecuted_blocks=1 00:15:23.508 00:15:23.508 ' 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:23.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.508 --rc genhtml_branch_coverage=1 00:15:23.508 --rc genhtml_function_coverage=1 00:15:23.508 --rc genhtml_legend=1 00:15:23.508 --rc geninfo_all_blocks=1 00:15:23.508 --rc geninfo_unexecuted_blocks=1 00:15:23.508 00:15:23.508 ' 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:23.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.508 --rc genhtml_branch_coverage=1 00:15:23.508 --rc genhtml_function_coverage=1 00:15:23.508 --rc genhtml_legend=1 00:15:23.508 --rc geninfo_all_blocks=1 00:15:23.508 --rc 
geninfo_unexecuted_blocks=1 00:15:23.508 00:15:23.508 ' 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:23.508 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:23.508 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:23.509 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:23.509 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:23.509 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:23.509 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:23.509 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1662097 00:15:23.509 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1662097' 00:15:23.509 Process pid: 1662097 00:15:23.509 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:23.509 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:23.509 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1662097 00:15:23.509 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 1662097 ']' 00:15:23.509 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.509 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:23.509 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:23.509 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:23.509 10:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:23.509 [2024-11-19 10:42:30.943112] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:15:23.509 [2024-11-19 10:42:30.943161] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:23.768 [2024-11-19 10:42:31.019098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:23.768 [2024-11-19 10:42:31.060000] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:23.768 [2024-11-19 10:42:31.060035] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:23.768 [2024-11-19 10:42:31.060043] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:23.768 [2024-11-19 10:42:31.060049] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:23.768 [2024-11-19 10:42:31.060054] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:23.768 [2024-11-19 10:42:31.061510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:23.768 [2024-11-19 10:42:31.061616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.768 [2024-11-19 10:42:31.061618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:23.768 10:42:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:23.768 10:42:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:15:23.768 10:42:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:25.145 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:25.145 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:25.145 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:25.145 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.145 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:25.145 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.145 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:25.145 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:25.145 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.145 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:25.145 malloc0 00:15:25.145 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.145 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:25.145 10:42:32 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.145 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:25.145 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.145 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:25.145 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.145 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:25.145 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.145 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:25.145 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.145 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:25.145 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.145 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:25.145 00:15:25.145 00:15:25.145 CUnit - A unit testing framework for C - Version 2.1-3 00:15:25.145 http://cunit.sourceforge.net/ 00:15:25.145 00:15:25.145 00:15:25.145 Suite: nvme_compliance 00:15:25.145 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-19 10:42:32.420872] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:25.145 [2024-11-19 10:42:32.422214] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:25.145 [2024-11-19 10:42:32.422231] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:25.145 [2024-11-19 10:42:32.422237] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:25.145 [2024-11-19 10:42:32.423893] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:25.145 passed 00:15:25.145 Test: admin_identify_ctrlr_verify_fused ...[2024-11-19 10:42:32.501453] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:25.145 [2024-11-19 10:42:32.504477] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:25.145 passed 00:15:25.145 Test: admin_identify_ns ...[2024-11-19 10:42:32.587455] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:25.404 [2024-11-19 10:42:32.647964] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:25.404 [2024-11-19 10:42:32.655965] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:25.404 [2024-11-19 10:42:32.677064] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:15:25.404 passed 00:15:25.404 Test: admin_get_features_mandatory_features ...[2024-11-19 10:42:32.751311] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:25.404 [2024-11-19 10:42:32.754334] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:25.404 passed 00:15:25.404 Test: admin_get_features_optional_features ...[2024-11-19 10:42:32.835878] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:25.404 [2024-11-19 10:42:32.838901] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:25.662 passed 00:15:25.662 Test: admin_set_features_number_of_queues ...[2024-11-19 10:42:32.915798] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:25.662 [2024-11-19 10:42:33.020036] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:25.662 passed 00:15:25.663 Test: admin_get_log_page_mandatory_logs ...[2024-11-19 10:42:33.097890] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:25.663 [2024-11-19 10:42:33.100915] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:25.921 passed 00:15:25.921 Test: admin_get_log_page_with_lpo ...[2024-11-19 10:42:33.178856] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:25.921 [2024-11-19 10:42:33.248958] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:25.921 [2024-11-19 10:42:33.262012] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:25.921 passed 00:15:25.921 Test: fabric_property_get ...[2024-11-19 10:42:33.336043] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:25.921 [2024-11-19 10:42:33.337284] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:25.921 [2024-11-19 10:42:33.339061] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:25.921 passed 00:15:26.179 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-19 10:42:33.420590] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:26.179 [2024-11-19 10:42:33.421833] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:26.179 [2024-11-19 10:42:33.423613] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:26.179 passed 00:15:26.179 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-19 10:42:33.500979] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:26.179 [2024-11-19 10:42:33.584022] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:26.179 [2024-11-19 10:42:33.601953] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:26.179 [2024-11-19 10:42:33.607037] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:26.437 passed 00:15:26.437 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-19 10:42:33.684994] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:26.437 [2024-11-19 10:42:33.686238] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:26.437 [2024-11-19 10:42:33.690033] vfio_user.c:2802:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:15:26.437 passed 00:15:26.437 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-19 10:42:33.766988] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:26.437 [2024-11-19 10:42:33.846959] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:26.437 [2024-11-19 10:42:33.870953] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:26.437 [2024-11-19 10:42:33.876047] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:26.694 passed 00:15:26.694 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-19 10:42:33.949210] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:26.694 [2024-11-19 10:42:33.950450] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:26.694 [2024-11-19 10:42:33.950474] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:26.694 [2024-11-19 10:42:33.952224] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:26.694 passed 00:15:26.694 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-19 10:42:34.030518] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:26.694 [2024-11-19 10:42:34.125968] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:26.694 [2024-11-19 10:42:34.133953] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:26.694 [2024-11-19 10:42:34.141954] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:26.952 [2024-11-19 10:42:34.149957] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:26.952 [2024-11-19 10:42:34.179049] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:26.952 passed 00:15:26.952 Test: admin_create_io_sq_verify_pc ...[2024-11-19 10:42:34.254206] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:26.952 [2024-11-19 10:42:34.270963] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:26.952 [2024-11-19 10:42:34.288396] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:26.952 passed 00:15:26.952 Test: admin_create_io_qp_max_qps ...[2024-11-19 10:42:34.368955] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:28.328 [2024-11-19 10:42:35.474959] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:15:28.586 [2024-11-19 10:42:35.851624] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:28.586 passed 00:15:28.586 Test: admin_create_io_sq_shared_cq ...[2024-11-19 10:42:35.929706] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:28.845 [2024-11-19 10:42:36.060953] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:28.845 [2024-11-19 10:42:36.098007] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:28.845 passed 00:15:28.845 00:15:28.845 Run Summary: Type Total Ran Passed Failed Inactive 00:15:28.845 suites 1 1 n/a 0 0 00:15:28.845 tests 18 18 18 0 0 00:15:28.845 asserts 
360 360 360 0 n/a 00:15:28.845 00:15:28.845 Elapsed time = 1.514 seconds 00:15:28.845 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1662097 00:15:28.845 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 1662097 ']' 00:15:28.845 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 1662097 00:15:28.845 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:15:28.845 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:28.846 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1662097 00:15:28.846 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:28.846 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:28.846 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1662097' 00:15:28.846 killing process with pid 1662097 00:15:28.846 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 1662097 00:15:28.846 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 1662097 00:15:29.104 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:29.104 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:29.104 00:15:29.104 real 0m5.691s 00:15:29.104 user 0m15.890s 00:15:29.104 sys 0m0.529s 00:15:29.104 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:29.104 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:29.104 ************************************ 00:15:29.104 END TEST nvmf_vfio_user_nvme_compliance 00:15:29.104 ************************************ 00:15:29.104 10:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:29.104 10:42:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:29.104 10:42:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:29.104 10:42:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:29.104 ************************************ 00:15:29.104 START TEST nvmf_vfio_user_fuzz 00:15:29.104 ************************************ 00:15:29.104 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:29.104 * Looking for test storage... 
00:15:29.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:29.105 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:29.105 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:15:29.105 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:29.363 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:29.363 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:29.363 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:29.363 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:29.363 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:29.363 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:29.363 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:29.363 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:29.363 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:29.363 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:29.363 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:29.363 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:29.363 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:29.363 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:29.363 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:29.363 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:29.363 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:29.363 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:29.363 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:29.363 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:29.363 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:29.363 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:29.363 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:29.363 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:29.363 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:29.363 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:29.363 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:29.363 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:29.363 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:29.363 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:29.363 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:29.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.363 --rc genhtml_branch_coverage=1 00:15:29.363 --rc genhtml_function_coverage=1 00:15:29.363 --rc genhtml_legend=1 00:15:29.363 --rc geninfo_all_blocks=1 00:15:29.363 --rc geninfo_unexecuted_blocks=1 00:15:29.363 00:15:29.363 ' 00:15:29.363 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:29.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.363 --rc genhtml_branch_coverage=1 00:15:29.363 --rc genhtml_function_coverage=1 00:15:29.363 --rc genhtml_legend=1 00:15:29.363 --rc geninfo_all_blocks=1 00:15:29.363 --rc geninfo_unexecuted_blocks=1 00:15:29.363 00:15:29.363 ' 00:15:29.363 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:29.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.363 --rc genhtml_branch_coverage=1 00:15:29.363 --rc genhtml_function_coverage=1 00:15:29.363 --rc genhtml_legend=1 00:15:29.363 --rc geninfo_all_blocks=1 00:15:29.363 --rc geninfo_unexecuted_blocks=1 00:15:29.363 00:15:29.363 ' 00:15:29.363 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:29.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.363 --rc genhtml_branch_coverage=1 00:15:29.363 --rc genhtml_function_coverage=1 00:15:29.363 --rc genhtml_legend=1 00:15:29.363 --rc geninfo_all_blocks=1 00:15:29.363 --rc geninfo_unexecuted_blocks=1 00:15:29.363 00:15:29.363 ' 00:15:29.363 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:29.363 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:29.363 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:29.363 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:29.363 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:29.363 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:29.363 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:29.363 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:29.363 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:29.363 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:29.364 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1663086 00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1663086' 00:15:29.364 Process pid: 1663086 00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1663086 00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 1663086 ']' 00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:29.364 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:29.623 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:29.623 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:15:29.623 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:30.559 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:30.559 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.559 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:30.559 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.559 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:30.559 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:30.559 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.559 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:30.559 malloc0 00:15:30.559 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.559 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:30.559 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.559 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:30.559 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.559 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:30.559 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.559 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:30.559 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.559 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:30.559 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.559 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:30.559 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.560 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
00:15:30.560 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:02.636 Fuzzing completed. Shutting down the fuzz application 00:16:02.636 00:16:02.636 Dumping successful admin opcodes: 00:16:02.636 8, 9, 10, 24, 00:16:02.636 Dumping successful io opcodes: 00:16:02.636 0, 00:16:02.636 NS: 0x20000081ef00 I/O qp, Total commands completed: 1034652, total successful commands: 4079, random_seed: 2676060480 00:16:02.636 NS: 0x20000081ef00 admin qp, Total commands completed: 257294, total successful commands: 2076, random_seed: 553448320 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1663086 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 1663086 ']' 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 1663086 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1663086 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1663086' 00:16:02.636 killing process with pid 1663086 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 1663086 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 1663086 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:02.636 00:16:02.636 real 0m32.212s 00:16:02.636 user 0m29.893s 00:16:02.636 sys 0m32.059s 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:02.636 
************************************ 00:16:02.636 END TEST nvmf_vfio_user_fuzz 00:16:02.636 ************************************ 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:02.636 ************************************ 00:16:02.636 START TEST nvmf_auth_target 00:16:02.636 ************************************ 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:02.636 * Looking for test storage... 00:16:02.636 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:02.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.636 --rc genhtml_branch_coverage=1 00:16:02.636 --rc genhtml_function_coverage=1 00:16:02.636 --rc genhtml_legend=1 00:16:02.636 --rc geninfo_all_blocks=1 00:16:02.636 --rc geninfo_unexecuted_blocks=1 00:16:02.636 00:16:02.636 ' 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:02.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.636 --rc genhtml_branch_coverage=1 00:16:02.636 --rc genhtml_function_coverage=1 00:16:02.636 --rc genhtml_legend=1 00:16:02.636 --rc geninfo_all_blocks=1 00:16:02.636 --rc geninfo_unexecuted_blocks=1 00:16:02.636 00:16:02.636 ' 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:02.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.636 --rc genhtml_branch_coverage=1 00:16:02.636 --rc genhtml_function_coverage=1 00:16:02.636 --rc genhtml_legend=1 00:16:02.636 --rc geninfo_all_blocks=1 00:16:02.636 --rc geninfo_unexecuted_blocks=1 00:16:02.636 00:16:02.636 ' 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:02.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.636 --rc genhtml_branch_coverage=1 00:16:02.636 --rc genhtml_function_coverage=1 00:16:02.636 --rc genhtml_legend=1 00:16:02.636 --rc geninfo_all_blocks=1 00:16:02.636 --rc geninfo_unexecuted_blocks=1 00:16:02.636 00:16:02.636 ' 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:02.636 10:43:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:02.636 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:02.637 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:02.637 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.911 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:07.911 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:07.911 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:07.911 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:07.911 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:07.911 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:07.911 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:07.911 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:07.911 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:07.911 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:07.911 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:07.911 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:07.911 
10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:07.911 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:07.911 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:07.911 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:07.911 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:07.911 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:07.911 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:07.911 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:07.911 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:07.911 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:07.911 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:07.911 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:07.911 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:07.911 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:07.911 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:07.911 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:07.911 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:07.911 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:07.911 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:07.911 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:07.911 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:07.911 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:07.911 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:07.911 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:07.911 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:07.911 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:07.911 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:07.911 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:07.911 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:07.911 10:43:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:07.911 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:07.911 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:07.911 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:07.911 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:07.912 Found net devices under 0000:86:00.0: cvl_0_0 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:07.912 Found net devices under 0000:86:00.1: cvl_0_1 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:07.912 10:43:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:07.912 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:07.912 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.482 ms 00:16:07.912 00:16:07.912 --- 10.0.0.2 ping statistics --- 00:16:07.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.912 rtt min/avg/max/mdev = 0.482/0.482/0.482/0.000 ms 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:07.912 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:07.912 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:16:07.912 00:16:07.912 --- 10.0.0.1 ping statistics --- 00:16:07.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.912 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1671396 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1671396 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1671396 ']' 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:07.912 10:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.912 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:07.912 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:07.912 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:07.912 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:07.912 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.912 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:07.912 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1671575 00:16:07.912 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:07.912 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:07.912 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:07.912 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:07.912 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:07.912 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:07.912 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:16:07.912 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:07.912 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:07.912 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=489ff530ba381bbb4b51dae8a1fcdba951fd7d270c9da6b5 00:16:07.912 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:07.912 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.359 00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 489ff530ba381bbb4b51dae8a1fcdba951fd7d270c9da6b5 0 00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 489ff530ba381bbb4b51dae8a1fcdba951fd7d270c9da6b5 0 00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=489ff530ba381bbb4b51dae8a1fcdba951fd7d270c9da6b5 00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.359 00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.359 00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.359 00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b9f88b4a15ec42a6fe6fe2807db719d30e77ae76987104ac320b9589cacdd727 00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.9KO 00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b9f88b4a15ec42a6fe6fe2807db719d30e77ae76987104ac320b9589cacdd727 3 00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b9f88b4a15ec42a6fe6fe2807db719d30e77ae76987104ac320b9589cacdd727 3 00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b9f88b4a15ec42a6fe6fe2807db719d30e77ae76987104ac320b9589cacdd727 00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.9KO 00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.9KO 00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.9KO 00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
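A note on what these repeated gen_dhchap_key invocations produce: the secret is a run of hex characters read from /dev/urandom (xxd -p -l len/2), and the inline "python -" step wraps it in the DHHC-1 interchange format, base64(ascii_hex_key + crc32(ascii_hex_key)), behind a two-digit hash id. The following is a sketch of that recipe; it takes the numeric id directly rather than the digest name that the script maps through its digests array (0=null, 1=sha256, 2=sha384, 3=sha512):

gen_dhchap_key() {
  local digest=$1 len=$2 key
  key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters
  python3 - "$key" "$digest" <<'EOF'
import base64
import sys
import zlib

# The ASCII hex string itself is the secret payload; a little-endian
# CRC32 of it is appended before base64 encoding.
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, byteorder="little")
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
EOF
}

gen_dhchap_key 0 48   # e.g. DHHC-1:00:NDg5ZmY1...Oox3TQ==: for the first key above

The encoding can be checked against the trace itself: the first nvme connect below passes DHHC-1:00:NDg5ZmY1...Oox3TQ==:, which is exactly the base64 of the 48-character hex string 489ff530... generated here plus its CRC. Each formatted secret is written to a mktemp file (/tmp/spdk.key-<digest>.XXX) and locked to mode 0600 before it is registered anywhere.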
00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b531debc5c2ecec3fa024b9e04dc7e8e 00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.HHa 00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b531debc5c2ecec3fa024b9e04dc7e8e 1 00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b531debc5c2ecec3fa024b9e04dc7e8e 1 00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b531debc5c2ecec3fa024b9e04dc7e8e 00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:07.913 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:08.172 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.HHa 00:16:08.172 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.HHa 00:16:08.172 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.HHa 00:16:08.172 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:08.172 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:08.172 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:08.172 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:08.172 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:08.172 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:08.172 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:08.172 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a92db619574cd8240b902aea644d1cfbf84f9d25cf1db8c7 00:16:08.172 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:08.172 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.tSb 00:16:08.172 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a92db619574cd8240b902aea644d1cfbf84f9d25cf1db8c7 2 00:16:08.172 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a92db619574cd8240b902aea644d1cfbf84f9d25cf1db8c7 2 00:16:08.172 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:08.172 10:43:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:08.172 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a92db619574cd8240b902aea644d1cfbf84f9d25cf1db8c7 00:16:08.172 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:08.172 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:08.172 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.tSb 00:16:08.172 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.tSb 00:16:08.172 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.tSb 00:16:08.172 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:08.172 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:08.172 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:08.172 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:08.172 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ecce34c80fb470c576a001bc61e4a5c720091f9dce817380 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.O7g 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ecce34c80fb470c576a001bc61e4a5c720091f9dce817380 2 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ecce34c80fb470c576a001bc61e4a5c720091f9dce817380 2 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ecce34c80fb470c576a001bc61e4a5c720091f9dce817380 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.O7g 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.O7g 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.O7g 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=bcd742c6777288c0c9340932857c0468 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.qj1 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key bcd742c6777288c0c9340932857c0468 1 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 bcd742c6777288c0c9340932857c0468 1 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=bcd742c6777288c0c9340932857c0468 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.qj1 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.qj1 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.qj1 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fb98ab2aacca8939e23f22bc5e501a94199d030ae8eaee7fa26354e66050c72a 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.6KP 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key fb98ab2aacca8939e23f22bc5e501a94199d030ae8eaee7fa26354e66050c72a 3 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fb98ab2aacca8939e23f22bc5e501a94199d030ae8eaee7fa26354e66050c72a 3 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fb98ab2aacca8939e23f22bc5e501a94199d030ae8eaee7fa26354e66050c72a 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.6KP 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.6KP 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.6KP 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1671396 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1671396 ']' 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:08.173 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.432 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:08.432 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:08.432 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1671575 /var/tmp/host.sock 00:16:08.432 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1671575 ']' 00:16:08.432 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:16:08.432 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:08.432 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:08.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:16:08.432 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:08.432 10:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.690 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:08.690 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:08.690 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:08.690 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.690 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.690 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.690 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:08.690 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.359 00:16:08.690 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.690 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.690 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.690 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.359 00:16:08.690 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.359 00:16:08.948 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.9KO ]] 00:16:08.948 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.9KO 00:16:08.948 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.948 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.948 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.948 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.9KO 00:16:08.948 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.9KO 00:16:09.206 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:09.206 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.HHa 00:16:09.206 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.206 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.206 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.206 10:43:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.HHa 00:16:09.206 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.HHa 00:16:09.464 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.tSb ]] 00:16:09.464 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.tSb 00:16:09.464 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.464 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.464 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.464 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.tSb 00:16:09.464 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.tSb 00:16:09.464 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:09.464 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.O7g 00:16:09.464 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.464 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.464 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.464 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.O7g 00:16:09.464 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.O7g 00:16:09.722 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.qj1 ]] 00:16:09.722 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.qj1 00:16:09.722 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.723 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.723 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.723 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.qj1 00:16:09.723 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.qj1 00:16:10.010 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:10.010 10:43:17 
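The registration loop running through these entries makes every key file visible to both daemons: the target over the default /var/tmp/spdk.sock and the host over /var/tmp/host.sock. A condensed sketch, assuming RPC points at SPDK's scripts/rpc.py (placeholder path) and keys/ckeys hold the file paths generated above:

RPC=/path/to/spdk/scripts/rpc.py   # placeholder path

for i in "${!keys[@]}"; do
  "$RPC" keyring_file_add_key "key$i" "${keys[i]}"                        # target side
  "$RPC" -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[i]}"  # host side
  if [[ -n ${ckeys[i]} ]]; then   # controller keys exist only for indexes 0-2 in this run
    "$RPC" keyring_file_add_key "ckey$i" "${ckeys[i]}"
    "$RPC" -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[i]}"
  fi
done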
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.6KP 00:16:10.010 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.010 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.010 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.010 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.6KP 00:16:10.010 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.6KP 00:16:10.281 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:10.281 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:10.281 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:10.281 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:10.281 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:10.281 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:10.281 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:10.281 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.281 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:10.281 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:10.281 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:10.281 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.281 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.281 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.281 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.281 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.281 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.281 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.281 
10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.574 00:16:10.574 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:10.574 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.574 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.832 10:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.832 10:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.832 10:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.832 10:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.832 10:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.832 10:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.832 { 00:16:10.832 "cntlid": 1, 00:16:10.832 "qid": 0, 00:16:10.832 "state": "enabled", 00:16:10.832 "thread": "nvmf_tgt_poll_group_000", 00:16:10.832 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:10.832 "listen_address": { 00:16:10.832 "trtype": "TCP", 00:16:10.832 "adrfam": "IPv4", 00:16:10.832 "traddr": "10.0.0.2", 00:16:10.832 "trsvcid": "4420" 00:16:10.832 }, 00:16:10.832 "peer_address": { 00:16:10.832 "trtype": "TCP", 00:16:10.832 "adrfam": "IPv4", 00:16:10.832 "traddr": "10.0.0.1", 00:16:10.832 "trsvcid": "45874" 00:16:10.832 }, 00:16:10.832 "auth": { 00:16:10.832 "state": "completed", 00:16:10.832 "digest": "sha256", 00:16:10.832 "dhgroup": "null" 00:16:10.832 } 00:16:10.832 } 00:16:10.832 ]' 00:16:10.832 10:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.832 10:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:10.832 10:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.832 10:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:10.832 10:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.832 10:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.832 10:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.832 10:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.091 10:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
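The entries above are the verification half of connect_authenticate: the host RPC confirms the controller attached under the expected name, then the target RPC reports which authentication parameters the new qpair actually negotiated. As a standalone sketch (RPC is again a placeholder for scripts/rpc.py):

RPC=/path/to/spdk/scripts/rpc.py
SUBSYS=nqn.2024-03.io.spdk:cnode0

[[ $("$RPC" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

qpairs=$("$RPC" nvmf_subsystem_get_qpairs "$SUBSYS")
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # DH-HMAC-CHAP finished
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]      # the requested hash
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]        # the requested DH group

Only after all three checks pass does the round detach the controller and move on.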
DHHC-1:00:NDg5ZmY1MzBiYTM4MWJiYjRiNTFkYWU4YTFmY2RiYTk1MWZkN2QyNzBjOWRhNmI1Oox3TQ==: --dhchap-ctrl-secret DHHC-1:03:YjlmODhiNGExNWVjNDJhNmZlNmZlMjgwN2RiNzE5ZDMwZTc3YWU3Njk4NzEwNGFjMzIwYjk1ODljYWNkZDcyNxiKMjg=: 00:16:11.091 10:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDg5ZmY1MzBiYTM4MWJiYjRiNTFkYWU4YTFmY2RiYTk1MWZkN2QyNzBjOWRhNmI1Oox3TQ==: --dhchap-ctrl-secret DHHC-1:03:YjlmODhiNGExNWVjNDJhNmZlNmZlMjgwN2RiNzE5ZDMwZTc3YWU3Njk4NzEwNGFjMzIwYjk1ODljYWNkZDcyNxiKMjg=: 00:16:11.658 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.658 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:11.658 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.658 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.658 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.658 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.658 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:11.658 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:11.915 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:11.915 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:11.915 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:11.915 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:11.915 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:11.915 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.915 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.915 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.915 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.915 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.915 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.915 10:43:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.915 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.173 00:16:12.173 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:12.173 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.173 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.432 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.432 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.432 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.432 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.432 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.432 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.432 { 00:16:12.432 "cntlid": 3, 00:16:12.432 "qid": 0, 00:16:12.432 "state": "enabled", 00:16:12.432 "thread": "nvmf_tgt_poll_group_000", 00:16:12.432 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:12.432 "listen_address": { 00:16:12.432 "trtype": "TCP", 00:16:12.432 "adrfam": "IPv4", 00:16:12.432 "traddr": "10.0.0.2", 00:16:12.432 "trsvcid": "4420" 00:16:12.432 }, 00:16:12.432 "peer_address": { 00:16:12.432 "trtype": "TCP", 00:16:12.432 "adrfam": "IPv4", 00:16:12.432 "traddr": "10.0.0.1", 00:16:12.432 "trsvcid": "45910" 00:16:12.432 }, 00:16:12.432 "auth": { 00:16:12.432 "state": "completed", 00:16:12.432 "digest": "sha256", 00:16:12.432 "dhgroup": "null" 00:16:12.432 } 00:16:12.432 } 00:16:12.432 ]' 00:16:12.432 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.432 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:12.432 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.432 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:12.432 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.432 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.433 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.433 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.691 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUzMWRlYmM1YzJlY2VjM2ZhMDI0YjllMDRkYzdlOGUPEylu: --dhchap-ctrl-secret DHHC-1:02:YTkyZGI2MTk1NzRjZDgyNDBiOTAyYWVhNjQ0ZDFjZmJmODRmOWQyNWNmMWRiOGM3TkyGxQ==: 00:16:12.691 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjUzMWRlYmM1YzJlY2VjM2ZhMDI0YjllMDRkYzdlOGUPEylu: --dhchap-ctrl-secret DHHC-1:02:YTkyZGI2MTk1NzRjZDgyNDBiOTAyYWVhNjQ0ZDFjZmJmODRmOWQyNWNmMWRiOGM3TkyGxQ==: 00:16:13.258 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.258 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:13.258 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.258 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.258 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.258 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.258 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:13.258 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:13.516 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:13.516 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.516 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:13.516 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:13.516 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:13.516 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.516 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.516 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.516 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.516 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.516 10:43:20 
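Each round also exercises the kernel initiator: once the SPDK host-side controller is detached, nvme-cli connects to the same subsystem with the formatted secrets and disconnects again. The flags, as they appear in the trace (secrets shortened to placeholders):

HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
KEY='DHHC-1:01:...'    # placeholder; a real formatted secret from above
CKEY='DHHC-1:02:...'   # placeholder

nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "nqn.2014-08.org.nvmexpress:uuid:$HOSTID" --hostid "$HOSTID" -l 0 \
    --dhchap-secret "$KEY" --dhchap-ctrl-secret "$CKEY"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0

Here -i 1 caps the connection at a single I/O queue, and -l 0 (ctrl-loss-tmo) makes the command give up immediately rather than retry if authentication fails.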
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.516 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.516 10:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.774 00:16:13.774 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:13.774 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:13.774 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.033 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.033 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.033 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.033 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.033 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.033 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.033 { 00:16:14.033 "cntlid": 5, 00:16:14.033 "qid": 0, 00:16:14.033 "state": "enabled", 00:16:14.033 "thread": "nvmf_tgt_poll_group_000", 00:16:14.033 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:14.033 "listen_address": { 00:16:14.033 "trtype": "TCP", 00:16:14.033 "adrfam": "IPv4", 00:16:14.033 "traddr": "10.0.0.2", 00:16:14.033 "trsvcid": "4420" 00:16:14.033 }, 00:16:14.033 "peer_address": { 00:16:14.033 "trtype": "TCP", 00:16:14.033 "adrfam": "IPv4", 00:16:14.033 "traddr": "10.0.0.1", 00:16:14.033 "trsvcid": "45938" 00:16:14.033 }, 00:16:14.033 "auth": { 00:16:14.033 "state": "completed", 00:16:14.033 "digest": "sha256", 00:16:14.033 "dhgroup": "null" 00:16:14.033 } 00:16:14.033 } 00:16:14.033 ]' 00:16:14.033 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.033 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:14.033 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.033 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:14.033 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.033 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.033 10:43:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.033 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.292 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWNjZTM0YzgwZmI0NzBjNTc2YTAwMWJjNjFlNGE1YzcyMDA5MWY5ZGNlODE3MzgwZqbY/Q==: --dhchap-ctrl-secret DHHC-1:01:YmNkNzQyYzY3NzcyODhjMGM5MzQwOTMyODU3YzA0Njio43xA: 00:16:14.292 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWNjZTM0YzgwZmI0NzBjNTc2YTAwMWJjNjFlNGE1YzcyMDA5MWY5ZGNlODE3MzgwZqbY/Q==: --dhchap-ctrl-secret DHHC-1:01:YmNkNzQyYzY3NzcyODhjMGM5MzQwOTMyODU3YzA0Njio43xA: 00:16:14.858 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.858 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:14.858 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.859 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.859 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.859 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:14.859 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:14.859 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:15.117 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:15.117 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.117 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:15.117 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:15.117 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:15.117 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.117 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:15.117 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.117 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
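Note the asymmetry in the round that starts here: ckeys[3] was left empty, so the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion drops the controller key and key index 3 exercises unidirectional authentication, where the host proves its identity but the target is not challenged in return. The flag difference against the bidirectional rounds, sketched with placeholder paths:

RPC=/path/to/spdk/scripts/rpc.py
SUBSYS=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562

# keys 0-2: bidirectional -- the target authenticates back via the ctrlr key
"$RPC" nvmf_subsystem_add_host "$SUBSYS" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# key 3: host-to-target only
"$RPC" nvmf_subsystem_add_host "$SUBSYS" "$HOSTNQN" --dhchap-key key3

Consistently, the nvme connect for this key a few entries below passes only --dhchap-secret, with no --dhchap-ctrl-secret.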
common/autotest_common.sh@10 -- # set +x 00:16:15.117 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.117 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:15.117 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:15.117 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:15.375 00:16:15.375 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.375 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.375 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.634 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.634 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.634 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.634 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.634 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.634 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.634 { 00:16:15.634 "cntlid": 7, 00:16:15.634 "qid": 0, 00:16:15.634 "state": "enabled", 00:16:15.634 "thread": "nvmf_tgt_poll_group_000", 00:16:15.634 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:15.634 "listen_address": { 00:16:15.634 "trtype": "TCP", 00:16:15.634 "adrfam": "IPv4", 00:16:15.634 "traddr": "10.0.0.2", 00:16:15.634 "trsvcid": "4420" 00:16:15.634 }, 00:16:15.634 "peer_address": { 00:16:15.634 "trtype": "TCP", 00:16:15.634 "adrfam": "IPv4", 00:16:15.634 "traddr": "10.0.0.1", 00:16:15.634 "trsvcid": "45958" 00:16:15.634 }, 00:16:15.634 "auth": { 00:16:15.634 "state": "completed", 00:16:15.634 "digest": "sha256", 00:16:15.634 "dhgroup": "null" 00:16:15.634 } 00:16:15.634 } 00:16:15.634 ]' 00:16:15.634 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.634 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:15.634 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.634 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:15.634 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.634 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.634 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.634 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.893 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmI5OGFiMmFhY2NhODkzOWUyM2YyMmJjNWU1MDFhOTQxOTlkMDMwYWU4ZWFlZTdmYTI2MzU0ZTY2MDUwYzcyYYQ4hdE=: 00:16:15.893 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZmI5OGFiMmFhY2NhODkzOWUyM2YyMmJjNWU1MDFhOTQxOTlkMDMwYWU4ZWFlZTdmYTI2MzU0ZTY2MDUwYzcyYYQ4hdE=: 00:16:16.462 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.462 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:16.462 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.462 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.462 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.462 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:16.462 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.462 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:16.462 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:16.722 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:16.722 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.722 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:16.722 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:16.722 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:16.722 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.722 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.722 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
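With the null-dhgroup column done, the trace is now repeating the same connect/verify/disconnect round for ffdhe2048, and the remaining rounds walk the rest of the matrix the same way. A sketch of the driving structure, with connect_authenticate stubbed and the dhgroup list assumed to be the conventional FFDHE set (only sha256 with null and ffdhe2048 is visible in this excerpt):

digests=(sha256 sha384 sha512)
dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)  # assumed full set
keys=(key0 key1 key2 key3)

# stub; the real helper performs the add_host / attach / verify / detach
# sequence shown in the rounds above
connect_authenticate() { echo "digest=$1 dhgroup=$2 keyid=$3"; }

for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      # each iteration first pins the host to exactly this digest/dhgroup
      # pair -- the repeated bdev_nvme_set_options calls in the trace --
      # then runs one authentication round
      connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
  done
done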
common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.722 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.722 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.722 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.722 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.722 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.980 00:16:16.980 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.980 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:16.980 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.239 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.239 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.239 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.239 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.239 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.239 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.239 { 00:16:17.239 "cntlid": 9, 00:16:17.239 "qid": 0, 00:16:17.239 "state": "enabled", 00:16:17.239 "thread": "nvmf_tgt_poll_group_000", 00:16:17.239 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:17.239 "listen_address": { 00:16:17.239 "trtype": "TCP", 00:16:17.239 "adrfam": "IPv4", 00:16:17.239 "traddr": "10.0.0.2", 00:16:17.239 "trsvcid": "4420" 00:16:17.239 }, 00:16:17.239 "peer_address": { 00:16:17.239 "trtype": "TCP", 00:16:17.239 "adrfam": "IPv4", 00:16:17.239 "traddr": "10.0.0.1", 00:16:17.239 "trsvcid": "45548" 00:16:17.239 }, 00:16:17.239 "auth": { 00:16:17.239 "state": "completed", 00:16:17.239 "digest": "sha256", 00:16:17.239 "dhgroup": "ffdhe2048" 00:16:17.239 } 00:16:17.239 } 00:16:17.239 ]' 00:16:17.239 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:17.239 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:17.239 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.239 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:16:17.239 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.239 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.239 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.239 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.498 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDg5ZmY1MzBiYTM4MWJiYjRiNTFkYWU4YTFmY2RiYTk1MWZkN2QyNzBjOWRhNmI1Oox3TQ==: --dhchap-ctrl-secret DHHC-1:03:YjlmODhiNGExNWVjNDJhNmZlNmZlMjgwN2RiNzE5ZDMwZTc3YWU3Njk4NzEwNGFjMzIwYjk1ODljYWNkZDcyNxiKMjg=: 00:16:17.498 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDg5ZmY1MzBiYTM4MWJiYjRiNTFkYWU4YTFmY2RiYTk1MWZkN2QyNzBjOWRhNmI1Oox3TQ==: --dhchap-ctrl-secret DHHC-1:03:YjlmODhiNGExNWVjNDJhNmZlNmZlMjgwN2RiNzE5ZDMwZTc3YWU3Njk4NzEwNGFjMzIwYjk1ODljYWNkZDcyNxiKMjg=: 00:16:18.065 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.065 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:18.065 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.065 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.065 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.065 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.065 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:18.065 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:18.324 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:18.324 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.324 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:18.324 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:18.324 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:18.324 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.324 10:43:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.324 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.324 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.324 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.324 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.324 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.324 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.583 00:16:18.583 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.583 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.583 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.842 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.842 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.842 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.842 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.842 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.842 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.842 { 00:16:18.842 "cntlid": 11, 00:16:18.842 "qid": 0, 00:16:18.842 "state": "enabled", 00:16:18.842 "thread": "nvmf_tgt_poll_group_000", 00:16:18.842 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:18.842 "listen_address": { 00:16:18.842 "trtype": "TCP", 00:16:18.842 "adrfam": "IPv4", 00:16:18.842 "traddr": "10.0.0.2", 00:16:18.842 "trsvcid": "4420" 00:16:18.842 }, 00:16:18.842 "peer_address": { 00:16:18.842 "trtype": "TCP", 00:16:18.842 "adrfam": "IPv4", 00:16:18.842 "traddr": "10.0.0.1", 00:16:18.842 "trsvcid": "45582" 00:16:18.842 }, 00:16:18.842 "auth": { 00:16:18.842 "state": "completed", 00:16:18.842 "digest": "sha256", 00:16:18.842 "dhgroup": "ffdhe2048" 00:16:18.842 } 00:16:18.842 } 00:16:18.842 ]' 00:16:18.842 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.842 10:43:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:18.842 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.842 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:18.842 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.842 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.842 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.842 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.101 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUzMWRlYmM1YzJlY2VjM2ZhMDI0YjllMDRkYzdlOGUPEylu: --dhchap-ctrl-secret DHHC-1:02:YTkyZGI2MTk1NzRjZDgyNDBiOTAyYWVhNjQ0ZDFjZmJmODRmOWQyNWNmMWRiOGM3TkyGxQ==: 00:16:19.101 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjUzMWRlYmM1YzJlY2VjM2ZhMDI0YjllMDRkYzdlOGUPEylu: --dhchap-ctrl-secret DHHC-1:02:YTkyZGI2MTk1NzRjZDgyNDBiOTAyYWVhNjQ0ZDFjZmJmODRmOWQyNWNmMWRiOGM3TkyGxQ==: 00:16:19.668 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.668 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.668 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:19.668 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.668 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.668 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.668 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.668 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:19.668 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:19.927 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:19.927 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.927 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:19.927 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:19.927 10:43:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:19.927 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.927 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.927 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.927 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.927 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.927 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.927 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.927 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.199 00:16:20.199 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.199 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.199 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.464 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.464 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.464 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.464 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.464 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.464 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.464 { 00:16:20.464 "cntlid": 13, 00:16:20.464 "qid": 0, 00:16:20.464 "state": "enabled", 00:16:20.464 "thread": "nvmf_tgt_poll_group_000", 00:16:20.464 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:20.464 "listen_address": { 00:16:20.464 "trtype": "TCP", 00:16:20.464 "adrfam": "IPv4", 00:16:20.464 "traddr": "10.0.0.2", 00:16:20.464 "trsvcid": "4420" 00:16:20.464 }, 00:16:20.464 "peer_address": { 00:16:20.464 "trtype": "TCP", 00:16:20.464 "adrfam": "IPv4", 00:16:20.464 "traddr": "10.0.0.1", 00:16:20.464 "trsvcid": "45596" 00:16:20.464 }, 00:16:20.464 "auth": { 00:16:20.464 "state": "completed", 00:16:20.464 "digest": 
"sha256", 00:16:20.464 "dhgroup": "ffdhe2048" 00:16:20.464 } 00:16:20.464 } 00:16:20.464 ]' 00:16:20.464 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.464 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:20.464 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.464 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:20.464 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.464 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.464 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.464 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.722 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWNjZTM0YzgwZmI0NzBjNTc2YTAwMWJjNjFlNGE1YzcyMDA5MWY5ZGNlODE3MzgwZqbY/Q==: --dhchap-ctrl-secret DHHC-1:01:YmNkNzQyYzY3NzcyODhjMGM5MzQwOTMyODU3YzA0Njio43xA: 00:16:20.723 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWNjZTM0YzgwZmI0NzBjNTc2YTAwMWJjNjFlNGE1YzcyMDA5MWY5ZGNlODE3MzgwZqbY/Q==: --dhchap-ctrl-secret DHHC-1:01:YmNkNzQyYzY3NzcyODhjMGM5MzQwOTMyODU3YzA0Njio43xA: 00:16:21.289 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.289 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:21.289 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.289 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.289 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.289 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.289 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:21.289 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:21.547 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:21.547 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.547 10:43:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:21.547 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:21.547 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:21.547 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.547 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:21.547 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.547 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.547 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.547 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:21.547 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:21.547 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:21.806 00:16:21.806 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.806 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.806 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.064 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.064 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.064 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.064 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.064 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.064 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.064 { 00:16:22.064 "cntlid": 15, 00:16:22.064 "qid": 0, 00:16:22.064 "state": "enabled", 00:16:22.064 "thread": "nvmf_tgt_poll_group_000", 00:16:22.064 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:22.064 "listen_address": { 00:16:22.064 "trtype": "TCP", 00:16:22.064 "adrfam": "IPv4", 00:16:22.064 "traddr": "10.0.0.2", 00:16:22.064 "trsvcid": "4420" 00:16:22.064 }, 00:16:22.064 "peer_address": { 00:16:22.064 "trtype": "TCP", 00:16:22.064 "adrfam": "IPv4", 00:16:22.064 "traddr": "10.0.0.1", 00:16:22.064 
"trsvcid": "45618" 00:16:22.064 }, 00:16:22.064 "auth": { 00:16:22.064 "state": "completed", 00:16:22.064 "digest": "sha256", 00:16:22.064 "dhgroup": "ffdhe2048" 00:16:22.064 } 00:16:22.064 } 00:16:22.064 ]' 00:16:22.064 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.064 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:22.064 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.064 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:22.064 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.064 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.064 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.064 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.323 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmI5OGFiMmFhY2NhODkzOWUyM2YyMmJjNWU1MDFhOTQxOTlkMDMwYWU4ZWFlZTdmYTI2MzU0ZTY2MDUwYzcyYYQ4hdE=: 00:16:22.323 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZmI5OGFiMmFhY2NhODkzOWUyM2YyMmJjNWU1MDFhOTQxOTlkMDMwYWU4ZWFlZTdmYTI2MzU0ZTY2MDUwYzcyYYQ4hdE=: 00:16:22.891 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.891 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:22.891 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.891 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.891 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.891 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:22.891 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.891 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:22.891 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:23.150 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:23.150 10:43:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.150 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:23.150 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:23.150 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:23.150 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.150 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.150 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.150 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.150 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.150 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.150 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.150 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.409 00:16:23.409 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.409 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.409 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.668 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.668 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.668 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.668 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.668 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.668 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.668 { 00:16:23.668 "cntlid": 17, 00:16:23.668 "qid": 0, 00:16:23.668 "state": "enabled", 00:16:23.668 "thread": "nvmf_tgt_poll_group_000", 00:16:23.668 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:23.668 "listen_address": { 00:16:23.668 "trtype": "TCP", 00:16:23.668 "adrfam": "IPv4", 
00:16:23.668 "traddr": "10.0.0.2", 00:16:23.668 "trsvcid": "4420" 00:16:23.668 }, 00:16:23.668 "peer_address": { 00:16:23.668 "trtype": "TCP", 00:16:23.668 "adrfam": "IPv4", 00:16:23.668 "traddr": "10.0.0.1", 00:16:23.668 "trsvcid": "45636" 00:16:23.668 }, 00:16:23.668 "auth": { 00:16:23.668 "state": "completed", 00:16:23.668 "digest": "sha256", 00:16:23.668 "dhgroup": "ffdhe3072" 00:16:23.668 } 00:16:23.668 } 00:16:23.668 ]' 00:16:23.668 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.668 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:23.668 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.668 10:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:23.668 10:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.668 10:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.668 10:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.668 10:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.926 10:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDg5ZmY1MzBiYTM4MWJiYjRiNTFkYWU4YTFmY2RiYTk1MWZkN2QyNzBjOWRhNmI1Oox3TQ==: --dhchap-ctrl-secret DHHC-1:03:YjlmODhiNGExNWVjNDJhNmZlNmZlMjgwN2RiNzE5ZDMwZTc3YWU3Njk4NzEwNGFjMzIwYjk1ODljYWNkZDcyNxiKMjg=: 00:16:23.926 10:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDg5ZmY1MzBiYTM4MWJiYjRiNTFkYWU4YTFmY2RiYTk1MWZkN2QyNzBjOWRhNmI1Oox3TQ==: --dhchap-ctrl-secret DHHC-1:03:YjlmODhiNGExNWVjNDJhNmZlNmZlMjgwN2RiNzE5ZDMwZTc3YWU3Njk4NzEwNGFjMzIwYjk1ODljYWNkZDcyNxiKMjg=: 00:16:24.493 10:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.493 10:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:24.493 10:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.493 10:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.493 10:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.493 10:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.493 10:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:24.493 10:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:24.751 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:24.751 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.751 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:24.751 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:24.751 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:24.751 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.751 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.751 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.751 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.751 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.751 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.751 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.751 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.010 00:16:25.010 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.010 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.010 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.268 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.268 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.268 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.268 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.268 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.268 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.268 { 
00:16:25.268 "cntlid": 19, 00:16:25.268 "qid": 0, 00:16:25.268 "state": "enabled", 00:16:25.268 "thread": "nvmf_tgt_poll_group_000", 00:16:25.268 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:25.268 "listen_address": { 00:16:25.268 "trtype": "TCP", 00:16:25.268 "adrfam": "IPv4", 00:16:25.268 "traddr": "10.0.0.2", 00:16:25.268 "trsvcid": "4420" 00:16:25.268 }, 00:16:25.268 "peer_address": { 00:16:25.268 "trtype": "TCP", 00:16:25.268 "adrfam": "IPv4", 00:16:25.268 "traddr": "10.0.0.1", 00:16:25.268 "trsvcid": "45646" 00:16:25.268 }, 00:16:25.268 "auth": { 00:16:25.268 "state": "completed", 00:16:25.268 "digest": "sha256", 00:16:25.268 "dhgroup": "ffdhe3072" 00:16:25.268 } 00:16:25.268 } 00:16:25.268 ]' 00:16:25.268 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.268 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:25.268 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.268 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:25.268 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.268 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.268 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.268 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.527 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUzMWRlYmM1YzJlY2VjM2ZhMDI0YjllMDRkYzdlOGUPEylu: --dhchap-ctrl-secret DHHC-1:02:YTkyZGI2MTk1NzRjZDgyNDBiOTAyYWVhNjQ0ZDFjZmJmODRmOWQyNWNmMWRiOGM3TkyGxQ==: 00:16:25.527 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjUzMWRlYmM1YzJlY2VjM2ZhMDI0YjllMDRkYzdlOGUPEylu: --dhchap-ctrl-secret DHHC-1:02:YTkyZGI2MTk1NzRjZDgyNDBiOTAyYWVhNjQ0ZDFjZmJmODRmOWQyNWNmMWRiOGM3TkyGxQ==: 00:16:26.093 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.093 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:26.093 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.093 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.093 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.093 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.093 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:26.094 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:26.355 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:26.355 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.355 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:26.355 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:26.355 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:26.355 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.356 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.356 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.356 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.356 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.356 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.356 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.356 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.614 00:16:26.614 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.614 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.614 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.872 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.872 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.872 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.872 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.872 10:43:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.872 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.872 { 00:16:26.872 "cntlid": 21, 00:16:26.872 "qid": 0, 00:16:26.872 "state": "enabled", 00:16:26.872 "thread": "nvmf_tgt_poll_group_000", 00:16:26.872 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:26.872 "listen_address": { 00:16:26.872 "trtype": "TCP", 00:16:26.872 "adrfam": "IPv4", 00:16:26.872 "traddr": "10.0.0.2", 00:16:26.872 "trsvcid": "4420" 00:16:26.872 }, 00:16:26.872 "peer_address": { 00:16:26.872 "trtype": "TCP", 00:16:26.872 "adrfam": "IPv4", 00:16:26.872 "traddr": "10.0.0.1", 00:16:26.872 "trsvcid": "54226" 00:16:26.872 }, 00:16:26.872 "auth": { 00:16:26.872 "state": "completed", 00:16:26.872 "digest": "sha256", 00:16:26.872 "dhgroup": "ffdhe3072" 00:16:26.872 } 00:16:26.872 } 00:16:26.872 ]' 00:16:26.872 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.872 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:26.872 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.872 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:26.872 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.872 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.872 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.872 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.131 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWNjZTM0YzgwZmI0NzBjNTc2YTAwMWJjNjFlNGE1YzcyMDA5MWY5ZGNlODE3MzgwZqbY/Q==: --dhchap-ctrl-secret DHHC-1:01:YmNkNzQyYzY3NzcyODhjMGM5MzQwOTMyODU3YzA0Njio43xA: 00:16:27.131 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWNjZTM0YzgwZmI0NzBjNTc2YTAwMWJjNjFlNGE1YzcyMDA5MWY5ZGNlODE3MzgwZqbY/Q==: --dhchap-ctrl-secret DHHC-1:01:YmNkNzQyYzY3NzcyODhjMGM5MzQwOTMyODU3YzA0Njio43xA: 00:16:27.698 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.698 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.698 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:27.698 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.698 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.698 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:27.698 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.698 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:27.698 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:27.956 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:27.956 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.956 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:27.956 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:27.956 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:27.956 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.956 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:27.956 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.956 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.956 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.956 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:27.956 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:27.956 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:28.215 00:16:28.215 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.215 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.215 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.474 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.474 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.474 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.474 10:43:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.474 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.474 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.474 { 00:16:28.474 "cntlid": 23, 00:16:28.474 "qid": 0, 00:16:28.474 "state": "enabled", 00:16:28.474 "thread": "nvmf_tgt_poll_group_000", 00:16:28.474 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:28.474 "listen_address": { 00:16:28.474 "trtype": "TCP", 00:16:28.474 "adrfam": "IPv4", 00:16:28.474 "traddr": "10.0.0.2", 00:16:28.474 "trsvcid": "4420" 00:16:28.474 }, 00:16:28.474 "peer_address": { 00:16:28.474 "trtype": "TCP", 00:16:28.474 "adrfam": "IPv4", 00:16:28.474 "traddr": "10.0.0.1", 00:16:28.474 "trsvcid": "54242" 00:16:28.474 }, 00:16:28.474 "auth": { 00:16:28.474 "state": "completed", 00:16:28.474 "digest": "sha256", 00:16:28.474 "dhgroup": "ffdhe3072" 00:16:28.474 } 00:16:28.474 } 00:16:28.474 ]' 00:16:28.474 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.474 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:28.474 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.474 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:28.474 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.474 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.474 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.474 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.732 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmI5OGFiMmFhY2NhODkzOWUyM2YyMmJjNWU1MDFhOTQxOTlkMDMwYWU4ZWFlZTdmYTI2MzU0ZTY2MDUwYzcyYYQ4hdE=: 00:16:28.733 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZmI5OGFiMmFhY2NhODkzOWUyM2YyMmJjNWU1MDFhOTQxOTlkMDMwYWU4ZWFlZTdmYTI2MzU0ZTY2MDUwYzcyYYQ4hdE=: 00:16:29.300 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.300 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:29.300 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.300 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.300 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:29.300 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:29.300 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.300 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:29.300 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:29.559 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:29.559 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.559 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:29.559 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:29.559 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:29.559 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.559 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.559 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.559 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.559 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.559 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.559 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.559 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.818 00:16:29.818 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.818 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.818 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.077 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.077 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.077 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.077 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.077 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.077 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.077 { 00:16:30.077 "cntlid": 25, 00:16:30.077 "qid": 0, 00:16:30.077 "state": "enabled", 00:16:30.077 "thread": "nvmf_tgt_poll_group_000", 00:16:30.077 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:30.077 "listen_address": { 00:16:30.077 "trtype": "TCP", 00:16:30.077 "adrfam": "IPv4", 00:16:30.077 "traddr": "10.0.0.2", 00:16:30.077 "trsvcid": "4420" 00:16:30.077 }, 00:16:30.077 "peer_address": { 00:16:30.077 "trtype": "TCP", 00:16:30.077 "adrfam": "IPv4", 00:16:30.077 "traddr": "10.0.0.1", 00:16:30.077 "trsvcid": "54268" 00:16:30.077 }, 00:16:30.077 "auth": { 00:16:30.077 "state": "completed", 00:16:30.077 "digest": "sha256", 00:16:30.077 "dhgroup": "ffdhe4096" 00:16:30.077 } 00:16:30.077 } 00:16:30.077 ]' 00:16:30.077 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.077 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:30.077 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.077 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:30.077 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.077 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.077 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.077 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.335 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDg5ZmY1MzBiYTM4MWJiYjRiNTFkYWU4YTFmY2RiYTk1MWZkN2QyNzBjOWRhNmI1Oox3TQ==: --dhchap-ctrl-secret DHHC-1:03:YjlmODhiNGExNWVjNDJhNmZlNmZlMjgwN2RiNzE5ZDMwZTc3YWU3Njk4NzEwNGFjMzIwYjk1ODljYWNkZDcyNxiKMjg=: 00:16:30.335 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDg5ZmY1MzBiYTM4MWJiYjRiNTFkYWU4YTFmY2RiYTk1MWZkN2QyNzBjOWRhNmI1Oox3TQ==: --dhchap-ctrl-secret DHHC-1:03:YjlmODhiNGExNWVjNDJhNmZlNmZlMjgwN2RiNzE5ZDMwZTc3YWU3Njk4NzEwNGFjMzIwYjk1ODljYWNkZDcyNxiKMjg=: 00:16:30.901 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.901 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:30.901 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.901 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.901 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.901 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.901 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:30.901 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:31.159 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:31.159 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.159 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:31.159 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:31.160 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:31.160 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.160 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.160 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.160 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.160 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.160 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.160 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.160 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.418 00:16:31.418 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.418 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.418 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.677 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.677 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.677 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.677 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.677 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.677 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.677 { 00:16:31.677 "cntlid": 27, 00:16:31.677 "qid": 0, 00:16:31.677 "state": "enabled", 00:16:31.677 "thread": "nvmf_tgt_poll_group_000", 00:16:31.677 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:31.677 "listen_address": { 00:16:31.677 "trtype": "TCP", 00:16:31.677 "adrfam": "IPv4", 00:16:31.677 "traddr": "10.0.0.2", 00:16:31.677 "trsvcid": "4420" 00:16:31.677 }, 00:16:31.677 "peer_address": { 00:16:31.677 "trtype": "TCP", 00:16:31.677 "adrfam": "IPv4", 00:16:31.677 "traddr": "10.0.0.1", 00:16:31.677 "trsvcid": "54288" 00:16:31.677 }, 00:16:31.677 "auth": { 00:16:31.677 "state": "completed", 00:16:31.677 "digest": "sha256", 00:16:31.677 "dhgroup": "ffdhe4096" 00:16:31.677 } 00:16:31.677 } 00:16:31.677 ]' 00:16:31.677 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.677 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:31.677 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.677 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:31.677 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.677 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.677 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.677 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.934 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUzMWRlYmM1YzJlY2VjM2ZhMDI0YjllMDRkYzdlOGUPEylu: --dhchap-ctrl-secret DHHC-1:02:YTkyZGI2MTk1NzRjZDgyNDBiOTAyYWVhNjQ0ZDFjZmJmODRmOWQyNWNmMWRiOGM3TkyGxQ==: 00:16:31.934 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjUzMWRlYmM1YzJlY2VjM2ZhMDI0YjllMDRkYzdlOGUPEylu: --dhchap-ctrl-secret DHHC-1:02:YTkyZGI2MTk1NzRjZDgyNDBiOTAyYWVhNjQ0ZDFjZmJmODRmOWQyNWNmMWRiOGM3TkyGxQ==: 00:16:32.501 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:16:32.501 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.501 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:32.501 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.501 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.501 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.501 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.501 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:32.501 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:32.760 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:32.760 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.760 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:32.760 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:32.760 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:32.760 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.760 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.760 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.760 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.760 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.760 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.760 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.760 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.019 00:16:33.019 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
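The trace above repeats one verification cycle per digest/dhgroup/key combination: pin the host's DH-HMAC-CHAP options, register the host NQN on the target subsystem with the keys under test, attach a bdev controller with the same keys, inspect the resulting qpair, then tear down and re-drive the same handshake through nvme-cli. A condensed sketch of that cycle, assembled only from the RPC invocations visible in this log — the shell variable names and the generic "$secret" placeholder are mine, and rpc_cmd is the suite's target-side RPC wrapper as traced above:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
hostid=80aaeb9f-0274-ea11-906e-0017a4403562

# 1. Limit the host-side bdev_nvme module to one digest/dhgroup pair.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

# 2. Allow the host on the target subsystem with the keys under test.
rpc_cmd nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key2 --dhchap-ctrlr-key ckey2

# 3. Attach a controller over TCP; the connect must authenticate with those keys.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $hostnqn -n $subnqn -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

# 4. Confirm the controller exists, then read back the qpair's auth block on the target.
$rpc -s /var/tmp/host.sock bdev_nvme_get_controllers
rpc_cmd nvmf_subsystem_get_qpairs $subnqn

# 5. Detach, repeat the same handshake with the kernel initiator, then clean up.
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n $subnqn -i 1 -q $hostnqn --hostid $hostid -l 0 --dhchap-secret "$secret"
nvme disconnect -n $subnqn
rpc_cmd nvmf_subsystem_remove_host $subnqn $hostnqn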
00:16:33.019 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.019 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.277 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.277 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.277 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.277 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.277 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.277 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.277 { 00:16:33.277 "cntlid": 29, 00:16:33.277 "qid": 0, 00:16:33.277 "state": "enabled", 00:16:33.277 "thread": "nvmf_tgt_poll_group_000", 00:16:33.277 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:33.277 "listen_address": { 00:16:33.277 "trtype": "TCP", 00:16:33.277 "adrfam": "IPv4", 00:16:33.277 "traddr": "10.0.0.2", 00:16:33.277 "trsvcid": "4420" 00:16:33.277 }, 00:16:33.277 "peer_address": { 00:16:33.277 "trtype": "TCP", 00:16:33.277 "adrfam": "IPv4", 00:16:33.277 "traddr": "10.0.0.1", 00:16:33.277 "trsvcid": "54324" 00:16:33.277 }, 00:16:33.277 "auth": { 00:16:33.277 "state": "completed", 00:16:33.277 "digest": "sha256", 00:16:33.277 "dhgroup": "ffdhe4096" 00:16:33.277 } 00:16:33.277 } 00:16:33.277 ]' 00:16:33.277 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.277 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:33.277 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.277 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:33.277 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.536 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.536 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.536 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.536 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWNjZTM0YzgwZmI0NzBjNTc2YTAwMWJjNjFlNGE1YzcyMDA5MWY5ZGNlODE3MzgwZqbY/Q==: --dhchap-ctrl-secret DHHC-1:01:YmNkNzQyYzY3NzcyODhjMGM5MzQwOTMyODU3YzA0Njio43xA: 00:16:33.536 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWNjZTM0YzgwZmI0NzBjNTc2YTAwMWJjNjFlNGE1YzcyMDA5MWY5ZGNlODE3MzgwZqbY/Q==: 
--dhchap-ctrl-secret DHHC-1:01:YmNkNzQyYzY3NzcyODhjMGM5MzQwOTMyODU3YzA0Njio43xA: 00:16:34.103 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.103 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:34.103 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.103 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.103 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.104 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.104 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:34.104 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:34.362 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:34.362 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.362 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:34.362 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:34.362 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:34.362 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.362 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:34.362 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.362 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.362 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.362 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:34.362 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:34.362 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:34.621 00:16:34.621 10:43:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.621 10:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.621 10:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.880 10:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.880 10:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.880 10:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.880 10:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.880 10:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.880 10:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.880 { 00:16:34.880 "cntlid": 31, 00:16:34.880 "qid": 0, 00:16:34.880 "state": "enabled", 00:16:34.880 "thread": "nvmf_tgt_poll_group_000", 00:16:34.880 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:34.880 "listen_address": { 00:16:34.880 "trtype": "TCP", 00:16:34.880 "adrfam": "IPv4", 00:16:34.880 "traddr": "10.0.0.2", 00:16:34.880 "trsvcid": "4420" 00:16:34.880 }, 00:16:34.880 "peer_address": { 00:16:34.880 "trtype": "TCP", 00:16:34.880 "adrfam": "IPv4", 00:16:34.880 "traddr": "10.0.0.1", 00:16:34.880 "trsvcid": "54344" 00:16:34.880 }, 00:16:34.880 "auth": { 00:16:34.880 "state": "completed", 00:16:34.880 "digest": "sha256", 00:16:34.880 "dhgroup": "ffdhe4096" 00:16:34.880 } 00:16:34.880 } 00:16:34.880 ]' 00:16:34.880 10:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.880 10:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:34.880 10:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.138 10:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:35.138 10:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.138 10:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.138 10:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.138 10:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.138 10:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmI5OGFiMmFhY2NhODkzOWUyM2YyMmJjNWU1MDFhOTQxOTlkMDMwYWU4ZWFlZTdmYTI2MzU0ZTY2MDUwYzcyYYQ4hdE=: 00:16:35.138 10:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:ZmI5OGFiMmFhY2NhODkzOWUyM2YyMmJjNWU1MDFhOTQxOTlkMDMwYWU4ZWFlZTdmYTI2MzU0ZTY2MDUwYzcyYYQ4hdE=: 00:16:35.702 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.959 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:35.959 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.959 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.959 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.959 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:35.959 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.959 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:35.959 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:35.959 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:35.959 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.959 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:35.959 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:35.960 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:35.960 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.960 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.960 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.960 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.960 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.960 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.960 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.960 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.524 00:16:36.524 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.524 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.524 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.524 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.524 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.524 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.524 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.524 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.524 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.524 { 00:16:36.524 "cntlid": 33, 00:16:36.524 "qid": 0, 00:16:36.524 "state": "enabled", 00:16:36.524 "thread": "nvmf_tgt_poll_group_000", 00:16:36.524 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:36.524 "listen_address": { 00:16:36.524 "trtype": "TCP", 00:16:36.524 "adrfam": "IPv4", 00:16:36.524 "traddr": "10.0.0.2", 00:16:36.524 "trsvcid": "4420" 00:16:36.524 }, 00:16:36.524 "peer_address": { 00:16:36.524 "trtype": "TCP", 00:16:36.524 "adrfam": "IPv4", 00:16:36.524 "traddr": "10.0.0.1", 00:16:36.524 "trsvcid": "54368" 00:16:36.524 }, 00:16:36.524 "auth": { 00:16:36.524 "state": "completed", 00:16:36.524 "digest": "sha256", 00:16:36.524 "dhgroup": "ffdhe6144" 00:16:36.524 } 00:16:36.524 } 00:16:36.525 ]' 00:16:36.525 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.782 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:36.782 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.782 10:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:36.782 10:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.782 10:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.782 10:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.782 10:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.040 10:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDg5ZmY1MzBiYTM4MWJiYjRiNTFkYWU4YTFmY2RiYTk1MWZkN2QyNzBjOWRhNmI1Oox3TQ==: --dhchap-ctrl-secret 
DHHC-1:03:YjlmODhiNGExNWVjNDJhNmZlNmZlMjgwN2RiNzE5ZDMwZTc3YWU3Njk4NzEwNGFjMzIwYjk1ODljYWNkZDcyNxiKMjg=: 00:16:37.041 10:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDg5ZmY1MzBiYTM4MWJiYjRiNTFkYWU4YTFmY2RiYTk1MWZkN2QyNzBjOWRhNmI1Oox3TQ==: --dhchap-ctrl-secret DHHC-1:03:YjlmODhiNGExNWVjNDJhNmZlNmZlMjgwN2RiNzE5ZDMwZTc3YWU3Njk4NzEwNGFjMzIwYjk1ODljYWNkZDcyNxiKMjg=: 00:16:37.609 10:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.609 10:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:37.609 10:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.609 10:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.609 10:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.609 10:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.609 10:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:37.609 10:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:37.867 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:37.867 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.867 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:37.867 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:37.867 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:37.867 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.868 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.868 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.868 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.868 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.868 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.868 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.868 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.126 00:16:38.126 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.126 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.126 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.384 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.384 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.384 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.384 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.384 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.384 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.384 { 00:16:38.384 "cntlid": 35, 00:16:38.384 "qid": 0, 00:16:38.384 "state": "enabled", 00:16:38.384 "thread": "nvmf_tgt_poll_group_000", 00:16:38.384 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:38.384 "listen_address": { 00:16:38.384 "trtype": "TCP", 00:16:38.384 "adrfam": "IPv4", 00:16:38.384 "traddr": "10.0.0.2", 00:16:38.384 "trsvcid": "4420" 00:16:38.384 }, 00:16:38.384 "peer_address": { 00:16:38.384 "trtype": "TCP", 00:16:38.384 "adrfam": "IPv4", 00:16:38.384 "traddr": "10.0.0.1", 00:16:38.384 "trsvcid": "41330" 00:16:38.384 }, 00:16:38.384 "auth": { 00:16:38.384 "state": "completed", 00:16:38.384 "digest": "sha256", 00:16:38.384 "dhgroup": "ffdhe6144" 00:16:38.384 } 00:16:38.384 } 00:16:38.384 ]' 00:16:38.384 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.384 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:38.384 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.384 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:38.384 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.384 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.384 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.384 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.643 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUzMWRlYmM1YzJlY2VjM2ZhMDI0YjllMDRkYzdlOGUPEylu: --dhchap-ctrl-secret DHHC-1:02:YTkyZGI2MTk1NzRjZDgyNDBiOTAyYWVhNjQ0ZDFjZmJmODRmOWQyNWNmMWRiOGM3TkyGxQ==: 00:16:38.643 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjUzMWRlYmM1YzJlY2VjM2ZhMDI0YjllMDRkYzdlOGUPEylu: --dhchap-ctrl-secret DHHC-1:02:YTkyZGI2MTk1NzRjZDgyNDBiOTAyYWVhNjQ0ZDFjZmJmODRmOWQyNWNmMWRiOGM3TkyGxQ==: 00:16:39.210 10:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.210 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.210 10:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:39.210 10:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.210 10:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.210 10:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.210 10:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.210 10:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:39.210 10:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:39.468 10:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:39.468 10:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.468 10:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:39.468 10:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:39.468 10:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:39.468 10:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.468 10:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.468 10:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.468 10:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.468 10:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.468 10:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.468 10:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.468 10:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.726 00:16:39.726 10:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.726 10:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.726 10:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.985 10:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.985 10:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.985 10:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.985 10:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.985 10:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.985 10:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.985 { 00:16:39.985 "cntlid": 37, 00:16:39.985 "qid": 0, 00:16:39.985 "state": "enabled", 00:16:39.985 "thread": "nvmf_tgt_poll_group_000", 00:16:39.985 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:39.985 "listen_address": { 00:16:39.985 "trtype": "TCP", 00:16:39.985 "adrfam": "IPv4", 00:16:39.985 "traddr": "10.0.0.2", 00:16:39.985 "trsvcid": "4420" 00:16:39.985 }, 00:16:39.985 "peer_address": { 00:16:39.985 "trtype": "TCP", 00:16:39.985 "adrfam": "IPv4", 00:16:39.985 "traddr": "10.0.0.1", 00:16:39.985 "trsvcid": "41356" 00:16:39.985 }, 00:16:39.985 "auth": { 00:16:39.985 "state": "completed", 00:16:39.985 "digest": "sha256", 00:16:39.985 "dhgroup": "ffdhe6144" 00:16:39.985 } 00:16:39.985 } 00:16:39.985 ]' 00:16:39.985 10:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.985 10:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:39.985 10:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.985 10:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:39.985 10:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.243 10:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.243 10:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:40.243 10:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.243 10:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWNjZTM0YzgwZmI0NzBjNTc2YTAwMWJjNjFlNGE1YzcyMDA5MWY5ZGNlODE3MzgwZqbY/Q==: --dhchap-ctrl-secret DHHC-1:01:YmNkNzQyYzY3NzcyODhjMGM5MzQwOTMyODU3YzA0Njio43xA: 00:16:40.243 10:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWNjZTM0YzgwZmI0NzBjNTc2YTAwMWJjNjFlNGE1YzcyMDA5MWY5ZGNlODE3MzgwZqbY/Q==: --dhchap-ctrl-secret DHHC-1:01:YmNkNzQyYzY3NzcyODhjMGM5MzQwOTMyODU3YzA0Njio43xA: 00:16:40.809 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.809 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:40.809 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.809 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.809 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.809 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.809 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:40.810 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:41.068 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:41.068 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.068 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:41.068 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:41.068 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:41.068 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.068 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:41.068 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.068 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.068 10:43:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.068 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:41.068 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:41.068 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:41.636 00:16:41.636 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.636 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.636 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.636 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.636 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.636 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.636 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.636 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.636 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.636 { 00:16:41.636 "cntlid": 39, 00:16:41.636 "qid": 0, 00:16:41.636 "state": "enabled", 00:16:41.636 "thread": "nvmf_tgt_poll_group_000", 00:16:41.636 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:41.636 "listen_address": { 00:16:41.636 "trtype": "TCP", 00:16:41.636 "adrfam": "IPv4", 00:16:41.636 "traddr": "10.0.0.2", 00:16:41.636 "trsvcid": "4420" 00:16:41.636 }, 00:16:41.636 "peer_address": { 00:16:41.636 "trtype": "TCP", 00:16:41.636 "adrfam": "IPv4", 00:16:41.636 "traddr": "10.0.0.1", 00:16:41.636 "trsvcid": "41386" 00:16:41.636 }, 00:16:41.636 "auth": { 00:16:41.636 "state": "completed", 00:16:41.636 "digest": "sha256", 00:16:41.636 "dhgroup": "ffdhe6144" 00:16:41.636 } 00:16:41.636 } 00:16:41.636 ]' 00:16:41.636 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.636 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:41.636 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.895 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:41.895 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.895 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:16:41.895 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.895 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.895 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmI5OGFiMmFhY2NhODkzOWUyM2YyMmJjNWU1MDFhOTQxOTlkMDMwYWU4ZWFlZTdmYTI2MzU0ZTY2MDUwYzcyYYQ4hdE=: 00:16:41.895 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZmI5OGFiMmFhY2NhODkzOWUyM2YyMmJjNWU1MDFhOTQxOTlkMDMwYWU4ZWFlZTdmYTI2MzU0ZTY2MDUwYzcyYYQ4hdE=: 00:16:42.460 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.460 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:42.460 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.460 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.718 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.718 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:42.718 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.718 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:42.718 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:42.718 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:42.718 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.718 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:42.718 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:42.718 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:42.718 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.718 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.718 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
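Interleaved with each cycle are three jq assertions over the nvmf_subsystem_get_qpairs JSON shown above; the escaped right-hand sides in the xtrace output (\s\h\a\2\5\6, \f\f\d\h\e\8\1\9\2, \c\o\m\p\l\e\t\e\d) are just how bash -x prints pattern-quoted literals, so the checks reduce to plain string equality. A standalone rendition of those checks for the ffdhe8192 iteration (the qpairs variable name is mine):

qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

# The first qpair's auth.digest and auth.dhgroup must match what this iteration
# configured, and auth.state must report a completed handshake.
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]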
00:16:42.718 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.718 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.718 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.718 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.718 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.285 00:16:43.285 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.285 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.285 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.544 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.544 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.544 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.544 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.544 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.544 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.544 { 00:16:43.544 "cntlid": 41, 00:16:43.544 "qid": 0, 00:16:43.544 "state": "enabled", 00:16:43.544 "thread": "nvmf_tgt_poll_group_000", 00:16:43.544 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:43.544 "listen_address": { 00:16:43.544 "trtype": "TCP", 00:16:43.544 "adrfam": "IPv4", 00:16:43.544 "traddr": "10.0.0.2", 00:16:43.544 "trsvcid": "4420" 00:16:43.544 }, 00:16:43.544 "peer_address": { 00:16:43.544 "trtype": "TCP", 00:16:43.544 "adrfam": "IPv4", 00:16:43.544 "traddr": "10.0.0.1", 00:16:43.544 "trsvcid": "41406" 00:16:43.544 }, 00:16:43.544 "auth": { 00:16:43.544 "state": "completed", 00:16:43.544 "digest": "sha256", 00:16:43.544 "dhgroup": "ffdhe8192" 00:16:43.544 } 00:16:43.544 } 00:16:43.544 ]' 00:16:43.544 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.544 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:43.544 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.544 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:43.544 10:43:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.544 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.544 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.544 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.803 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDg5ZmY1MzBiYTM4MWJiYjRiNTFkYWU4YTFmY2RiYTk1MWZkN2QyNzBjOWRhNmI1Oox3TQ==: --dhchap-ctrl-secret DHHC-1:03:YjlmODhiNGExNWVjNDJhNmZlNmZlMjgwN2RiNzE5ZDMwZTc3YWU3Njk4NzEwNGFjMzIwYjk1ODljYWNkZDcyNxiKMjg=: 00:16:43.803 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDg5ZmY1MzBiYTM4MWJiYjRiNTFkYWU4YTFmY2RiYTk1MWZkN2QyNzBjOWRhNmI1Oox3TQ==: --dhchap-ctrl-secret DHHC-1:03:YjlmODhiNGExNWVjNDJhNmZlNmZlMjgwN2RiNzE5ZDMwZTc3YWU3Njk4NzEwNGFjMzIwYjk1ODljYWNkZDcyNxiKMjg=: 00:16:44.371 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.371 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.371 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:44.371 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.371 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.371 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.371 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.371 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:44.371 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:44.630 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:44.630 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.630 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:44.630 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:44.630 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:44.630 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.630 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.630 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.630 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.630 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.630 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.630 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.630 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.196 00:16:45.196 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.196 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.197 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.197 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.197 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.197 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.197 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.455 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.455 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.455 { 00:16:45.455 "cntlid": 43, 00:16:45.455 "qid": 0, 00:16:45.455 "state": "enabled", 00:16:45.455 "thread": "nvmf_tgt_poll_group_000", 00:16:45.455 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:45.455 "listen_address": { 00:16:45.455 "trtype": "TCP", 00:16:45.455 "adrfam": "IPv4", 00:16:45.455 "traddr": "10.0.0.2", 00:16:45.455 "trsvcid": "4420" 00:16:45.455 }, 00:16:45.455 "peer_address": { 00:16:45.455 "trtype": "TCP", 00:16:45.455 "adrfam": "IPv4", 00:16:45.455 "traddr": "10.0.0.1", 00:16:45.455 "trsvcid": "41428" 00:16:45.455 }, 00:16:45.455 "auth": { 00:16:45.455 "state": "completed", 00:16:45.455 "digest": "sha256", 00:16:45.455 "dhgroup": "ffdhe8192" 00:16:45.455 } 00:16:45.455 } 00:16:45.455 ]' 00:16:45.455 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.455 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:16:45.455 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.455 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:45.455 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.455 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.455 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.455 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.714 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUzMWRlYmM1YzJlY2VjM2ZhMDI0YjllMDRkYzdlOGUPEylu: --dhchap-ctrl-secret DHHC-1:02:YTkyZGI2MTk1NzRjZDgyNDBiOTAyYWVhNjQ0ZDFjZmJmODRmOWQyNWNmMWRiOGM3TkyGxQ==: 00:16:45.714 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjUzMWRlYmM1YzJlY2VjM2ZhMDI0YjllMDRkYzdlOGUPEylu: --dhchap-ctrl-secret DHHC-1:02:YTkyZGI2MTk1NzRjZDgyNDBiOTAyYWVhNjQ0ZDFjZmJmODRmOWQyNWNmMWRiOGM3TkyGxQ==: 00:16:46.282 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.282 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.282 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:46.282 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.282 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.282 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.282 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.282 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:46.282 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:46.540 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:46.540 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.540 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:46.540 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:46.540 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:46.540 10:43:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.540 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.540 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.540 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.540 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.540 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.540 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.540 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.107 00:16:47.107 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.107 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.107 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.107 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.107 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.107 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.107 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.107 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.107 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.107 { 00:16:47.107 "cntlid": 45, 00:16:47.107 "qid": 0, 00:16:47.107 "state": "enabled", 00:16:47.107 "thread": "nvmf_tgt_poll_group_000", 00:16:47.107 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:47.107 "listen_address": { 00:16:47.107 "trtype": "TCP", 00:16:47.107 "adrfam": "IPv4", 00:16:47.107 "traddr": "10.0.0.2", 00:16:47.107 "trsvcid": "4420" 00:16:47.107 }, 00:16:47.107 "peer_address": { 00:16:47.107 "trtype": "TCP", 00:16:47.107 "adrfam": "IPv4", 00:16:47.107 "traddr": "10.0.0.1", 00:16:47.107 "trsvcid": "51510" 00:16:47.107 }, 00:16:47.107 "auth": { 00:16:47.107 "state": "completed", 00:16:47.107 "digest": "sha256", 00:16:47.107 "dhgroup": "ffdhe8192" 00:16:47.107 } 00:16:47.107 } 00:16:47.107 ]' 00:16:47.107 
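The jq checks that follow pick the negotiated parameters out of the qpairs dump above. A standalone equivalent of that verification step, with the same jq paths the trace uses (assumes the controller from the previous step is still attached and jq is installed):

rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]  # negotiated hash function
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]  # negotiated DH group
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # handshake finished successfully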
10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.107 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:47.107 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.365 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:47.365 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.365 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.365 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.365 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.627 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWNjZTM0YzgwZmI0NzBjNTc2YTAwMWJjNjFlNGE1YzcyMDA5MWY5ZGNlODE3MzgwZqbY/Q==: --dhchap-ctrl-secret DHHC-1:01:YmNkNzQyYzY3NzcyODhjMGM5MzQwOTMyODU3YzA0Njio43xA: 00:16:47.627 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWNjZTM0YzgwZmI0NzBjNTc2YTAwMWJjNjFlNGE1YzcyMDA5MWY5ZGNlODE3MzgwZqbY/Q==: --dhchap-ctrl-secret DHHC-1:01:YmNkNzQyYzY3NzcyODhjMGM5MzQwOTMyODU3YzA0Njio43xA: 00:16:47.962 10:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.272 10:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:48.272 10:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.272 10:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.272 10:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.272 10:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.272 10:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:48.273 10:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:48.273 10:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:48.273 10:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.273 10:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:48.273 10:43:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:48.273 10:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:48.273 10:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.273 10:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:48.273 10:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.273 10:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.273 10:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.273 10:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:48.273 10:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:48.273 10:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:48.851 00:16:48.851 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.851 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.851 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.109 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.109 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.109 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.109 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.109 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.109 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.109 { 00:16:49.110 "cntlid": 47, 00:16:49.110 "qid": 0, 00:16:49.110 "state": "enabled", 00:16:49.110 "thread": "nvmf_tgt_poll_group_000", 00:16:49.110 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:49.110 "listen_address": { 00:16:49.110 "trtype": "TCP", 00:16:49.110 "adrfam": "IPv4", 00:16:49.110 "traddr": "10.0.0.2", 00:16:49.110 "trsvcid": "4420" 00:16:49.110 }, 00:16:49.110 "peer_address": { 00:16:49.110 "trtype": "TCP", 00:16:49.110 "adrfam": "IPv4", 00:16:49.110 "traddr": "10.0.0.1", 00:16:49.110 "trsvcid": "51546" 00:16:49.110 }, 00:16:49.110 "auth": { 00:16:49.110 "state": "completed", 00:16:49.110 
"digest": "sha256", 00:16:49.110 "dhgroup": "ffdhe8192" 00:16:49.110 } 00:16:49.110 } 00:16:49.110 ]' 00:16:49.110 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.110 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.110 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.110 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:49.110 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.110 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.110 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.110 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.369 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmI5OGFiMmFhY2NhODkzOWUyM2YyMmJjNWU1MDFhOTQxOTlkMDMwYWU4ZWFlZTdmYTI2MzU0ZTY2MDUwYzcyYYQ4hdE=: 00:16:49.369 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZmI5OGFiMmFhY2NhODkzOWUyM2YyMmJjNWU1MDFhOTQxOTlkMDMwYWU4ZWFlZTdmYTI2MzU0ZTY2MDUwYzcyYYQ4hdE=: 00:16:49.934 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.934 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.934 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:49.934 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.934 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.934 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.934 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:49.934 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:49.934 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.934 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:49.935 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:50.193 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:16:50.193 10:43:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.193 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:50.193 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:50.193 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:50.193 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.193 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.193 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.193 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.193 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.193 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.193 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.193 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.451 00:16:50.451 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.451 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.451 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.709 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.709 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.709 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.709 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.709 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.709 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.709 { 00:16:50.709 "cntlid": 49, 00:16:50.709 "qid": 0, 00:16:50.709 "state": "enabled", 00:16:50.709 "thread": "nvmf_tgt_poll_group_000", 00:16:50.709 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:50.709 "listen_address": { 00:16:50.709 "trtype": "TCP", 00:16:50.709 "adrfam": "IPv4", 
00:16:50.709 "traddr": "10.0.0.2", 00:16:50.709 "trsvcid": "4420" 00:16:50.709 }, 00:16:50.709 "peer_address": { 00:16:50.709 "trtype": "TCP", 00:16:50.709 "adrfam": "IPv4", 00:16:50.709 "traddr": "10.0.0.1", 00:16:50.709 "trsvcid": "51584" 00:16:50.709 }, 00:16:50.709 "auth": { 00:16:50.709 "state": "completed", 00:16:50.709 "digest": "sha384", 00:16:50.709 "dhgroup": "null" 00:16:50.709 } 00:16:50.709 } 00:16:50.709 ]' 00:16:50.709 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.709 10:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:50.709 10:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.709 10:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:50.709 10:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.709 10:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.709 10:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.709 10:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.967 10:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDg5ZmY1MzBiYTM4MWJiYjRiNTFkYWU4YTFmY2RiYTk1MWZkN2QyNzBjOWRhNmI1Oox3TQ==: --dhchap-ctrl-secret DHHC-1:03:YjlmODhiNGExNWVjNDJhNmZlNmZlMjgwN2RiNzE5ZDMwZTc3YWU3Njk4NzEwNGFjMzIwYjk1ODljYWNkZDcyNxiKMjg=: 00:16:50.967 10:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDg5ZmY1MzBiYTM4MWJiYjRiNTFkYWU4YTFmY2RiYTk1MWZkN2QyNzBjOWRhNmI1Oox3TQ==: --dhchap-ctrl-secret DHHC-1:03:YjlmODhiNGExNWVjNDJhNmZlNmZlMjgwN2RiNzE5ZDMwZTc3YWU3Njk4NzEwNGFjMzIwYjk1ODljYWNkZDcyNxiKMjg=: 00:16:51.534 10:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.534 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.534 10:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:51.534 10:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.534 10:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.534 10:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.534 10:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.534 10:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:51.534 10:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:51.793 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:16:51.793 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.793 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:51.793 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:51.793 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:51.793 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.793 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.793 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.793 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.793 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.793 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.793 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.793 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.051 00:16:52.051 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.051 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.051 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.310 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.310 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.310 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.310 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.310 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.310 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.310 { 00:16:52.310 "cntlid": 51, 00:16:52.310 "qid": 0, 00:16:52.310 "state": "enabled", 
00:16:52.310 "thread": "nvmf_tgt_poll_group_000", 00:16:52.310 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:52.310 "listen_address": { 00:16:52.310 "trtype": "TCP", 00:16:52.310 "adrfam": "IPv4", 00:16:52.310 "traddr": "10.0.0.2", 00:16:52.310 "trsvcid": "4420" 00:16:52.310 }, 00:16:52.310 "peer_address": { 00:16:52.310 "trtype": "TCP", 00:16:52.310 "adrfam": "IPv4", 00:16:52.310 "traddr": "10.0.0.1", 00:16:52.310 "trsvcid": "51616" 00:16:52.310 }, 00:16:52.310 "auth": { 00:16:52.310 "state": "completed", 00:16:52.310 "digest": "sha384", 00:16:52.310 "dhgroup": "null" 00:16:52.310 } 00:16:52.310 } 00:16:52.310 ]' 00:16:52.310 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.310 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:52.310 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.310 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:52.310 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.310 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.310 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.310 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.568 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUzMWRlYmM1YzJlY2VjM2ZhMDI0YjllMDRkYzdlOGUPEylu: --dhchap-ctrl-secret DHHC-1:02:YTkyZGI2MTk1NzRjZDgyNDBiOTAyYWVhNjQ0ZDFjZmJmODRmOWQyNWNmMWRiOGM3TkyGxQ==: 00:16:52.569 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjUzMWRlYmM1YzJlY2VjM2ZhMDI0YjllMDRkYzdlOGUPEylu: --dhchap-ctrl-secret DHHC-1:02:YTkyZGI2MTk1NzRjZDgyNDBiOTAyYWVhNjQ0ZDFjZmJmODRmOWQyNWNmMWRiOGM3TkyGxQ==: 00:16:53.135 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.135 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:53.135 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.135 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.135 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.135 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.135 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:16:53.135 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:53.394 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:16:53.394 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.394 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:53.394 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:53.394 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:53.394 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.394 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.394 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.394 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.394 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.394 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.394 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.394 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.653 00:16:53.653 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.653 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.653 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.911 10:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.911 10:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.911 10:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.911 10:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.911 10:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.911 10:44:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.911 { 00:16:53.911 "cntlid": 53, 00:16:53.911 "qid": 0, 00:16:53.911 "state": "enabled", 00:16:53.911 "thread": "nvmf_tgt_poll_group_000", 00:16:53.911 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:53.911 "listen_address": { 00:16:53.911 "trtype": "TCP", 00:16:53.911 "adrfam": "IPv4", 00:16:53.911 "traddr": "10.0.0.2", 00:16:53.911 "trsvcid": "4420" 00:16:53.911 }, 00:16:53.911 "peer_address": { 00:16:53.911 "trtype": "TCP", 00:16:53.911 "adrfam": "IPv4", 00:16:53.911 "traddr": "10.0.0.1", 00:16:53.911 "trsvcid": "51644" 00:16:53.911 }, 00:16:53.911 "auth": { 00:16:53.911 "state": "completed", 00:16:53.911 "digest": "sha384", 00:16:53.911 "dhgroup": "null" 00:16:53.911 } 00:16:53.911 } 00:16:53.911 ]' 00:16:53.911 10:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.911 10:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:53.911 10:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.911 10:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:53.911 10:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.911 10:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.911 10:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.911 10:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.169 10:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWNjZTM0YzgwZmI0NzBjNTc2YTAwMWJjNjFlNGE1YzcyMDA5MWY5ZGNlODE3MzgwZqbY/Q==: --dhchap-ctrl-secret DHHC-1:01:YmNkNzQyYzY3NzcyODhjMGM5MzQwOTMyODU3YzA0Njio43xA: 00:16:54.169 10:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWNjZTM0YzgwZmI0NzBjNTc2YTAwMWJjNjFlNGE1YzcyMDA5MWY5ZGNlODE3MzgwZqbY/Q==: --dhchap-ctrl-secret DHHC-1:01:YmNkNzQyYzY3NzcyODhjMGM5MzQwOTMyODU3YzA0Njio43xA: 00:16:54.736 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.736 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.736 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:54.736 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.736 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.736 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.736 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:16:54.736 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:54.736 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:54.994 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:16:54.994 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.994 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:54.994 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:54.994 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:54.994 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.994 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:54.994 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.994 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.994 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.994 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:54.994 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:54.994 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:55.253 00:16:55.253 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.253 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.253 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.511 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.511 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.511 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.511 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.511 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.511 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.511 { 00:16:55.511 "cntlid": 55, 00:16:55.511 "qid": 0, 00:16:55.511 "state": "enabled", 00:16:55.511 "thread": "nvmf_tgt_poll_group_000", 00:16:55.511 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:55.511 "listen_address": { 00:16:55.511 "trtype": "TCP", 00:16:55.511 "adrfam": "IPv4", 00:16:55.511 "traddr": "10.0.0.2", 00:16:55.511 "trsvcid": "4420" 00:16:55.511 }, 00:16:55.511 "peer_address": { 00:16:55.511 "trtype": "TCP", 00:16:55.511 "adrfam": "IPv4", 00:16:55.511 "traddr": "10.0.0.1", 00:16:55.511 "trsvcid": "51660" 00:16:55.511 }, 00:16:55.511 "auth": { 00:16:55.511 "state": "completed", 00:16:55.511 "digest": "sha384", 00:16:55.511 "dhgroup": "null" 00:16:55.511 } 00:16:55.511 } 00:16:55.511 ]' 00:16:55.511 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.511 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:55.511 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.511 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:55.511 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.511 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.511 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.511 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.770 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmI5OGFiMmFhY2NhODkzOWUyM2YyMmJjNWU1MDFhOTQxOTlkMDMwYWU4ZWFlZTdmYTI2MzU0ZTY2MDUwYzcyYYQ4hdE=: 00:16:55.770 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZmI5OGFiMmFhY2NhODkzOWUyM2YyMmJjNWU1MDFhOTQxOTlkMDMwYWU4ZWFlZTdmYTI2MzU0ZTY2MDUwYzcyYYQ4hdE=: 00:16:56.337 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.337 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:56.337 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.337 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.337 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.337 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:56.337 10:44:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.337 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:56.337 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:56.596 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:16:56.596 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.596 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:56.596 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:56.596 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:56.596 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.596 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.596 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.597 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.597 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.597 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.597 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.597 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.855 00:16:56.855 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.855 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.855 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.113 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.113 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.113 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:57.113 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.113 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.113 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.113 { 00:16:57.113 "cntlid": 57, 00:16:57.113 "qid": 0, 00:16:57.113 "state": "enabled", 00:16:57.113 "thread": "nvmf_tgt_poll_group_000", 00:16:57.113 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:57.113 "listen_address": { 00:16:57.113 "trtype": "TCP", 00:16:57.113 "adrfam": "IPv4", 00:16:57.113 "traddr": "10.0.0.2", 00:16:57.113 "trsvcid": "4420" 00:16:57.113 }, 00:16:57.113 "peer_address": { 00:16:57.113 "trtype": "TCP", 00:16:57.113 "adrfam": "IPv4", 00:16:57.113 "traddr": "10.0.0.1", 00:16:57.113 "trsvcid": "56358" 00:16:57.113 }, 00:16:57.113 "auth": { 00:16:57.113 "state": "completed", 00:16:57.113 "digest": "sha384", 00:16:57.113 "dhgroup": "ffdhe2048" 00:16:57.113 } 00:16:57.113 } 00:16:57.113 ]' 00:16:57.113 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.113 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:57.113 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.113 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:57.113 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.113 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.113 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.113 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.372 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDg5ZmY1MzBiYTM4MWJiYjRiNTFkYWU4YTFmY2RiYTk1MWZkN2QyNzBjOWRhNmI1Oox3TQ==: --dhchap-ctrl-secret DHHC-1:03:YjlmODhiNGExNWVjNDJhNmZlNmZlMjgwN2RiNzE5ZDMwZTc3YWU3Njk4NzEwNGFjMzIwYjk1ODljYWNkZDcyNxiKMjg=: 00:16:57.372 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDg5ZmY1MzBiYTM4MWJiYjRiNTFkYWU4YTFmY2RiYTk1MWZkN2QyNzBjOWRhNmI1Oox3TQ==: --dhchap-ctrl-secret DHHC-1:03:YjlmODhiNGExNWVjNDJhNmZlNmZlMjgwN2RiNzE5ZDMwZTc3YWU3Njk4NzEwNGFjMzIwYjk1ODljYWNkZDcyNxiKMjg=: 00:16:57.938 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.938 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:57.938 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.938 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.938 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.938 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.938 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:57.938 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:58.196 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:16:58.196 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.196 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:58.196 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:58.196 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:58.196 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.196 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.196 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.196 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.196 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.196 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.196 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.196 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.455 00:16:58.455 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.455 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.455 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.713 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.713 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.713 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.713 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.713 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.713 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.713 { 00:16:58.713 "cntlid": 59, 00:16:58.713 "qid": 0, 00:16:58.713 "state": "enabled", 00:16:58.713 "thread": "nvmf_tgt_poll_group_000", 00:16:58.713 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:58.713 "listen_address": { 00:16:58.713 "trtype": "TCP", 00:16:58.713 "adrfam": "IPv4", 00:16:58.713 "traddr": "10.0.0.2", 00:16:58.713 "trsvcid": "4420" 00:16:58.713 }, 00:16:58.713 "peer_address": { 00:16:58.713 "trtype": "TCP", 00:16:58.713 "adrfam": "IPv4", 00:16:58.713 "traddr": "10.0.0.1", 00:16:58.713 "trsvcid": "56400" 00:16:58.713 }, 00:16:58.713 "auth": { 00:16:58.713 "state": "completed", 00:16:58.713 "digest": "sha384", 00:16:58.713 "dhgroup": "ffdhe2048" 00:16:58.713 } 00:16:58.713 } 00:16:58.713 ]' 00:16:58.713 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.713 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:58.713 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.713 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:58.713 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.713 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.713 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.713 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.973 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUzMWRlYmM1YzJlY2VjM2ZhMDI0YjllMDRkYzdlOGUPEylu: --dhchap-ctrl-secret DHHC-1:02:YTkyZGI2MTk1NzRjZDgyNDBiOTAyYWVhNjQ0ZDFjZmJmODRmOWQyNWNmMWRiOGM3TkyGxQ==: 00:16:58.973 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjUzMWRlYmM1YzJlY2VjM2ZhMDI0YjllMDRkYzdlOGUPEylu: --dhchap-ctrl-secret DHHC-1:02:YTkyZGI2MTk1NzRjZDgyNDBiOTAyYWVhNjQ0ZDFjZmJmODRmOWQyNWNmMWRiOGM3TkyGxQ==: 00:16:59.540 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.540 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:59.540 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.540 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.540 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.540 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.540 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:59.540 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:59.798 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:16:59.798 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.798 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:59.798 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:59.798 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:59.798 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.798 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.798 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.798 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.798 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.798 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.798 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.799 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.057 00:17:00.057 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.057 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.057 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.057 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.057 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.057 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.057 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.057 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.057 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.057 { 00:17:00.057 "cntlid": 61, 00:17:00.057 "qid": 0, 00:17:00.057 "state": "enabled", 00:17:00.057 "thread": "nvmf_tgt_poll_group_000", 00:17:00.057 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:00.057 "listen_address": { 00:17:00.057 "trtype": "TCP", 00:17:00.057 "adrfam": "IPv4", 00:17:00.057 "traddr": "10.0.0.2", 00:17:00.057 "trsvcid": "4420" 00:17:00.057 }, 00:17:00.057 "peer_address": { 00:17:00.057 "trtype": "TCP", 00:17:00.057 "adrfam": "IPv4", 00:17:00.057 "traddr": "10.0.0.1", 00:17:00.057 "trsvcid": "56424" 00:17:00.057 }, 00:17:00.057 "auth": { 00:17:00.057 "state": "completed", 00:17:00.057 "digest": "sha384", 00:17:00.057 "dhgroup": "ffdhe2048" 00:17:00.057 } 00:17:00.057 } 00:17:00.057 ]' 00:17:00.057 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.315 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:00.315 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.315 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:00.315 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.315 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.315 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.315 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.573 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWNjZTM0YzgwZmI0NzBjNTc2YTAwMWJjNjFlNGE1YzcyMDA5MWY5ZGNlODE3MzgwZqbY/Q==: --dhchap-ctrl-secret DHHC-1:01:YmNkNzQyYzY3NzcyODhjMGM5MzQwOTMyODU3YzA0Njio43xA: 00:17:00.573 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWNjZTM0YzgwZmI0NzBjNTc2YTAwMWJjNjFlNGE1YzcyMDA5MWY5ZGNlODE3MzgwZqbY/Q==: --dhchap-ctrl-secret DHHC-1:01:YmNkNzQyYzY3NzcyODhjMGM5MzQwOTMyODU3YzA0Njio43xA: 00:17:01.140 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.140 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:01.140 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.140 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.140 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.140 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.140 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:01.140 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:01.399 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:01.399 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.399 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:01.399 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:01.399 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:01.399 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.399 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:01.399 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.399 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.399 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.399 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:01.399 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.399 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.657 00:17:01.657 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.657 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.657 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.657 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.657 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.657 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.657 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.916 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.916 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.916 { 00:17:01.916 "cntlid": 63, 00:17:01.916 "qid": 0, 00:17:01.916 "state": "enabled", 00:17:01.916 "thread": "nvmf_tgt_poll_group_000", 00:17:01.916 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:01.916 "listen_address": { 00:17:01.916 "trtype": "TCP", 00:17:01.916 "adrfam": "IPv4", 00:17:01.916 "traddr": "10.0.0.2", 00:17:01.916 "trsvcid": "4420" 00:17:01.916 }, 00:17:01.916 "peer_address": { 00:17:01.916 "trtype": "TCP", 00:17:01.916 "adrfam": "IPv4", 00:17:01.916 "traddr": "10.0.0.1", 00:17:01.916 "trsvcid": "56442" 00:17:01.916 }, 00:17:01.916 "auth": { 00:17:01.916 "state": "completed", 00:17:01.916 "digest": "sha384", 00:17:01.916 "dhgroup": "ffdhe2048" 00:17:01.916 } 00:17:01.916 } 00:17:01.916 ]' 00:17:01.916 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.916 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:01.916 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.916 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:01.916 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.916 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.916 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.916 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.175 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmI5OGFiMmFhY2NhODkzOWUyM2YyMmJjNWU1MDFhOTQxOTlkMDMwYWU4ZWFlZTdmYTI2MzU0ZTY2MDUwYzcyYYQ4hdE=: 00:17:02.175 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZmI5OGFiMmFhY2NhODkzOWUyM2YyMmJjNWU1MDFhOTQxOTlkMDMwYWU4ZWFlZTdmYTI2MzU0ZTY2MDUwYzcyYYQ4hdE=: 00:17:02.742 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:02.742 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.742 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:02.742 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.742 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.742 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.742 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:02.742 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.742 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:02.742 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:03.000 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:03.000 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.000 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:03.000 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:03.000 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:03.000 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.000 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.000 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.000 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.000 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.000 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.000 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.000 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.259 
00:17:03.259 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.259 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.259 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.259 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.259 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.259 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.259 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.517 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.517 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.517 { 00:17:03.517 "cntlid": 65, 00:17:03.517 "qid": 0, 00:17:03.517 "state": "enabled", 00:17:03.517 "thread": "nvmf_tgt_poll_group_000", 00:17:03.517 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:03.517 "listen_address": { 00:17:03.517 "trtype": "TCP", 00:17:03.517 "adrfam": "IPv4", 00:17:03.517 "traddr": "10.0.0.2", 00:17:03.517 "trsvcid": "4420" 00:17:03.517 }, 00:17:03.517 "peer_address": { 00:17:03.517 "trtype": "TCP", 00:17:03.517 "adrfam": "IPv4", 00:17:03.517 "traddr": "10.0.0.1", 00:17:03.517 "trsvcid": "56464" 00:17:03.517 }, 00:17:03.517 "auth": { 00:17:03.517 "state": "completed", 00:17:03.517 "digest": "sha384", 00:17:03.517 "dhgroup": "ffdhe3072" 00:17:03.517 } 00:17:03.517 } 00:17:03.517 ]' 00:17:03.517 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.517 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:03.517 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.517 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:03.517 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.517 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.517 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.517 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.776 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDg5ZmY1MzBiYTM4MWJiYjRiNTFkYWU4YTFmY2RiYTk1MWZkN2QyNzBjOWRhNmI1Oox3TQ==: --dhchap-ctrl-secret DHHC-1:03:YjlmODhiNGExNWVjNDJhNmZlNmZlMjgwN2RiNzE5ZDMwZTc3YWU3Njk4NzEwNGFjMzIwYjk1ODljYWNkZDcyNxiKMjg=: 00:17:03.776 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDg5ZmY1MzBiYTM4MWJiYjRiNTFkYWU4YTFmY2RiYTk1MWZkN2QyNzBjOWRhNmI1Oox3TQ==: --dhchap-ctrl-secret DHHC-1:03:YjlmODhiNGExNWVjNDJhNmZlNmZlMjgwN2RiNzE5ZDMwZTc3YWU3Njk4NzEwNGFjMzIwYjk1ODljYWNkZDcyNxiKMjg=: 00:17:04.342 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.343 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.343 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:04.343 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.343 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.343 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.343 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:04.343 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:04.343 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:04.601 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:04.601 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:04.601 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:04.601 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:04.601 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:04.601 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.601 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.601 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.601 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.601 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.601 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.601 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.601 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.860 00:17:04.860 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.860 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.860 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.860 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.860 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.860 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.860 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.118 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.118 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.118 { 00:17:05.118 "cntlid": 67, 00:17:05.118 "qid": 0, 00:17:05.118 "state": "enabled", 00:17:05.118 "thread": "nvmf_tgt_poll_group_000", 00:17:05.118 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:05.118 "listen_address": { 00:17:05.118 "trtype": "TCP", 00:17:05.118 "adrfam": "IPv4", 00:17:05.118 "traddr": "10.0.0.2", 00:17:05.118 "trsvcid": "4420" 00:17:05.118 }, 00:17:05.118 "peer_address": { 00:17:05.118 "trtype": "TCP", 00:17:05.118 "adrfam": "IPv4", 00:17:05.118 "traddr": "10.0.0.1", 00:17:05.118 "trsvcid": "56488" 00:17:05.118 }, 00:17:05.118 "auth": { 00:17:05.118 "state": "completed", 00:17:05.118 "digest": "sha384", 00:17:05.118 "dhgroup": "ffdhe3072" 00:17:05.118 } 00:17:05.118 } 00:17:05.118 ]' 00:17:05.118 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:05.118 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:05.118 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.118 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:05.118 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.118 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.118 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.118 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.377 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUzMWRlYmM1YzJlY2VjM2ZhMDI0YjllMDRkYzdlOGUPEylu: --dhchap-ctrl-secret 
DHHC-1:02:YTkyZGI2MTk1NzRjZDgyNDBiOTAyYWVhNjQ0ZDFjZmJmODRmOWQyNWNmMWRiOGM3TkyGxQ==: 00:17:05.377 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjUzMWRlYmM1YzJlY2VjM2ZhMDI0YjllMDRkYzdlOGUPEylu: --dhchap-ctrl-secret DHHC-1:02:YTkyZGI2MTk1NzRjZDgyNDBiOTAyYWVhNjQ0ZDFjZmJmODRmOWQyNWNmMWRiOGM3TkyGxQ==: 00:17:05.946 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.946 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.946 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:05.946 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.946 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.946 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.946 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.946 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:05.946 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:06.205 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:06.205 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.205 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:06.205 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:06.205 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:06.205 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.205 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.205 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.205 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.205 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.205 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.205 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.206 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.464 00:17:06.464 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.464 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.464 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.722 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.722 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.722 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.722 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.722 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.722 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.722 { 00:17:06.722 "cntlid": 69, 00:17:06.722 "qid": 0, 00:17:06.722 "state": "enabled", 00:17:06.722 "thread": "nvmf_tgt_poll_group_000", 00:17:06.722 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:06.722 "listen_address": { 00:17:06.722 "trtype": "TCP", 00:17:06.723 "adrfam": "IPv4", 00:17:06.723 "traddr": "10.0.0.2", 00:17:06.723 "trsvcid": "4420" 00:17:06.723 }, 00:17:06.723 "peer_address": { 00:17:06.723 "trtype": "TCP", 00:17:06.723 "adrfam": "IPv4", 00:17:06.723 "traddr": "10.0.0.1", 00:17:06.723 "trsvcid": "56508" 00:17:06.723 }, 00:17:06.723 "auth": { 00:17:06.723 "state": "completed", 00:17:06.723 "digest": "sha384", 00:17:06.723 "dhgroup": "ffdhe3072" 00:17:06.723 } 00:17:06.723 } 00:17:06.723 ]' 00:17:06.723 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.723 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:06.723 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.723 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:06.723 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.723 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.723 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.723 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:06.981 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWNjZTM0YzgwZmI0NzBjNTc2YTAwMWJjNjFlNGE1YzcyMDA5MWY5ZGNlODE3MzgwZqbY/Q==: --dhchap-ctrl-secret DHHC-1:01:YmNkNzQyYzY3NzcyODhjMGM5MzQwOTMyODU3YzA0Njio43xA: 00:17:06.981 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWNjZTM0YzgwZmI0NzBjNTc2YTAwMWJjNjFlNGE1YzcyMDA5MWY5ZGNlODE3MzgwZqbY/Q==: --dhchap-ctrl-secret DHHC-1:01:YmNkNzQyYzY3NzcyODhjMGM5MzQwOTMyODU3YzA0Njio43xA: 00:17:07.548 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.548 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.548 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:07.548 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.548 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.548 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.548 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.548 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:07.548 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:07.806 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:07.806 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.806 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:07.806 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:07.806 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:07.806 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.806 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:07.806 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.806 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.807 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.807 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
00:17:07.807 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:07.807 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:08.065 00:17:08.065 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.065 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.065 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.324 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.324 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.324 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.324 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.324 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.324 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.324 { 00:17:08.324 "cntlid": 71, 00:17:08.324 "qid": 0, 00:17:08.324 "state": "enabled", 00:17:08.324 "thread": "nvmf_tgt_poll_group_000", 00:17:08.324 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:08.324 "listen_address": { 00:17:08.324 "trtype": "TCP", 00:17:08.324 "adrfam": "IPv4", 00:17:08.324 "traddr": "10.0.0.2", 00:17:08.324 "trsvcid": "4420" 00:17:08.324 }, 00:17:08.324 "peer_address": { 00:17:08.324 "trtype": "TCP", 00:17:08.324 "adrfam": "IPv4", 00:17:08.324 "traddr": "10.0.0.1", 00:17:08.324 "trsvcid": "55068" 00:17:08.324 }, 00:17:08.324 "auth": { 00:17:08.324 "state": "completed", 00:17:08.324 "digest": "sha384", 00:17:08.324 "dhgroup": "ffdhe3072" 00:17:08.324 } 00:17:08.324 } 00:17:08.324 ]' 00:17:08.324 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.324 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:08.324 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.324 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:08.324 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.324 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.324 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.324 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.582 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmI5OGFiMmFhY2NhODkzOWUyM2YyMmJjNWU1MDFhOTQxOTlkMDMwYWU4ZWFlZTdmYTI2MzU0ZTY2MDUwYzcyYYQ4hdE=: 00:17:08.582 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZmI5OGFiMmFhY2NhODkzOWUyM2YyMmJjNWU1MDFhOTQxOTlkMDMwYWU4ZWFlZTdmYTI2MzU0ZTY2MDUwYzcyYYQ4hdE=: 00:17:09.150 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.150 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:09.150 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.150 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.150 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.150 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:09.150 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.150 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:09.150 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:09.409 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:09.409 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.409 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:09.409 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:09.409 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:09.409 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.409 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.409 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.409 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.409 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:09.409 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.409 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.409 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.667 00:17:09.667 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.667 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.667 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.926 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.926 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.926 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.926 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.926 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.926 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.926 { 00:17:09.926 "cntlid": 73, 00:17:09.926 "qid": 0, 00:17:09.926 "state": "enabled", 00:17:09.926 "thread": "nvmf_tgt_poll_group_000", 00:17:09.926 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:09.926 "listen_address": { 00:17:09.926 "trtype": "TCP", 00:17:09.926 "adrfam": "IPv4", 00:17:09.926 "traddr": "10.0.0.2", 00:17:09.926 "trsvcid": "4420" 00:17:09.926 }, 00:17:09.926 "peer_address": { 00:17:09.926 "trtype": "TCP", 00:17:09.926 "adrfam": "IPv4", 00:17:09.926 "traddr": "10.0.0.1", 00:17:09.926 "trsvcid": "55090" 00:17:09.926 }, 00:17:09.926 "auth": { 00:17:09.926 "state": "completed", 00:17:09.926 "digest": "sha384", 00:17:09.926 "dhgroup": "ffdhe4096" 00:17:09.926 } 00:17:09.926 } 00:17:09.926 ]' 00:17:09.926 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.926 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:09.926 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.926 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:09.926 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.926 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.926 
10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.926 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.184 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDg5ZmY1MzBiYTM4MWJiYjRiNTFkYWU4YTFmY2RiYTk1MWZkN2QyNzBjOWRhNmI1Oox3TQ==: --dhchap-ctrl-secret DHHC-1:03:YjlmODhiNGExNWVjNDJhNmZlNmZlMjgwN2RiNzE5ZDMwZTc3YWU3Njk4NzEwNGFjMzIwYjk1ODljYWNkZDcyNxiKMjg=: 00:17:10.184 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDg5ZmY1MzBiYTM4MWJiYjRiNTFkYWU4YTFmY2RiYTk1MWZkN2QyNzBjOWRhNmI1Oox3TQ==: --dhchap-ctrl-secret DHHC-1:03:YjlmODhiNGExNWVjNDJhNmZlNmZlMjgwN2RiNzE5ZDMwZTc3YWU3Njk4NzEwNGFjMzIwYjk1ODljYWNkZDcyNxiKMjg=: 00:17:10.750 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.750 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:10.750 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.750 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.750 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.750 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.750 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:10.750 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:11.011 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:11.011 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.011 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:11.011 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:11.011 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:11.011 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.011 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.011 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.011 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.011 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.011 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.011 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.011 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.269 00:17:11.269 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.269 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.269 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.528 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.528 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.528 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.528 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.528 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.528 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.528 { 00:17:11.528 "cntlid": 75, 00:17:11.528 "qid": 0, 00:17:11.528 "state": "enabled", 00:17:11.528 "thread": "nvmf_tgt_poll_group_000", 00:17:11.528 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:11.528 "listen_address": { 00:17:11.528 "trtype": "TCP", 00:17:11.528 "adrfam": "IPv4", 00:17:11.528 "traddr": "10.0.0.2", 00:17:11.528 "trsvcid": "4420" 00:17:11.528 }, 00:17:11.528 "peer_address": { 00:17:11.528 "trtype": "TCP", 00:17:11.528 "adrfam": "IPv4", 00:17:11.528 "traddr": "10.0.0.1", 00:17:11.528 "trsvcid": "55116" 00:17:11.528 }, 00:17:11.528 "auth": { 00:17:11.528 "state": "completed", 00:17:11.528 "digest": "sha384", 00:17:11.529 "dhgroup": "ffdhe4096" 00:17:11.529 } 00:17:11.529 } 00:17:11.529 ]' 00:17:11.529 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.529 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:11.529 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.529 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:17:11.529 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.529 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.529 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.529 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.787 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUzMWRlYmM1YzJlY2VjM2ZhMDI0YjllMDRkYzdlOGUPEylu: --dhchap-ctrl-secret DHHC-1:02:YTkyZGI2MTk1NzRjZDgyNDBiOTAyYWVhNjQ0ZDFjZmJmODRmOWQyNWNmMWRiOGM3TkyGxQ==: 00:17:11.787 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjUzMWRlYmM1YzJlY2VjM2ZhMDI0YjllMDRkYzdlOGUPEylu: --dhchap-ctrl-secret DHHC-1:02:YTkyZGI2MTk1NzRjZDgyNDBiOTAyYWVhNjQ0ZDFjZmJmODRmOWQyNWNmMWRiOGM3TkyGxQ==: 00:17:12.353 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.353 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:12.353 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.353 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.353 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.353 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.353 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:12.353 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:12.612 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:12.612 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.613 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:12.613 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:12.613 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:12.613 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.613 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.613 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.613 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.613 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.613 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.613 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.613 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.870 00:17:12.870 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.870 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.870 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.128 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.128 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.128 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.128 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.128 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.128 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.128 { 00:17:13.128 "cntlid": 77, 00:17:13.128 "qid": 0, 00:17:13.128 "state": "enabled", 00:17:13.128 "thread": "nvmf_tgt_poll_group_000", 00:17:13.128 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:13.128 "listen_address": { 00:17:13.128 "trtype": "TCP", 00:17:13.128 "adrfam": "IPv4", 00:17:13.128 "traddr": "10.0.0.2", 00:17:13.128 "trsvcid": "4420" 00:17:13.128 }, 00:17:13.128 "peer_address": { 00:17:13.128 "trtype": "TCP", 00:17:13.128 "adrfam": "IPv4", 00:17:13.128 "traddr": "10.0.0.1", 00:17:13.128 "trsvcid": "55136" 00:17:13.128 }, 00:17:13.128 "auth": { 00:17:13.128 "state": "completed", 00:17:13.128 "digest": "sha384", 00:17:13.128 "dhgroup": "ffdhe4096" 00:17:13.128 } 00:17:13.128 } 00:17:13.128 ]' 00:17:13.128 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.128 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:13.129 10:44:20 
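
Each round in this trace follows the same verification pattern: the host attaches a controller with a DH-CHAP key pair, then the test reads the negotiated parameters back from the target's queue-pair state and asserts on them with jq. A minimal standalone sketch of that read-back check, assuming an SPDK target on its default RPC socket, a host RPC server on /var/tmp/host.sock, and jq on PATH (paths and names mirror the log; this is an illustration, not the script itself):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # The attached controller must be visible on the host side.
    name=$("$RPC" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]

    # The target reports the authenticated qpair, including the digest and
    # DH group that DH-HMAC-CHAP actually negotiated.
    qpairs=$("$RPC" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed ]]
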
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.129 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:13.129 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.129 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.129 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.129 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.387 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWNjZTM0YzgwZmI0NzBjNTc2YTAwMWJjNjFlNGE1YzcyMDA5MWY5ZGNlODE3MzgwZqbY/Q==: --dhchap-ctrl-secret DHHC-1:01:YmNkNzQyYzY3NzcyODhjMGM5MzQwOTMyODU3YzA0Njio43xA: 00:17:13.387 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWNjZTM0YzgwZmI0NzBjNTc2YTAwMWJjNjFlNGE1YzcyMDA5MWY5ZGNlODE3MzgwZqbY/Q==: --dhchap-ctrl-secret DHHC-1:01:YmNkNzQyYzY3NzcyODhjMGM5MzQwOTMyODU3YzA0Njio43xA: 00:17:13.952 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.952 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:13.952 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.952 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.952 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.952 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.952 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:13.952 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:14.210 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:14.210 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.210 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:14.210 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:14.211 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:14.211 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.211 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:14.211 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.211 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.211 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.211 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:14.211 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:14.211 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:14.469 00:17:14.469 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.469 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.469 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.726 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.726 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.726 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.727 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.727 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.727 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.727 { 00:17:14.727 "cntlid": 79, 00:17:14.727 "qid": 0, 00:17:14.727 "state": "enabled", 00:17:14.727 "thread": "nvmf_tgt_poll_group_000", 00:17:14.727 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:14.727 "listen_address": { 00:17:14.727 "trtype": "TCP", 00:17:14.727 "adrfam": "IPv4", 00:17:14.727 "traddr": "10.0.0.2", 00:17:14.727 "trsvcid": "4420" 00:17:14.727 }, 00:17:14.727 "peer_address": { 00:17:14.727 "trtype": "TCP", 00:17:14.727 "adrfam": "IPv4", 00:17:14.727 "traddr": "10.0.0.1", 00:17:14.727 "trsvcid": "55146" 00:17:14.727 }, 00:17:14.727 "auth": { 00:17:14.727 "state": "completed", 00:17:14.727 "digest": "sha384", 00:17:14.727 "dhgroup": "ffdhe4096" 00:17:14.727 } 00:17:14.727 } 00:17:14.727 ]' 00:17:14.727 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.727 10:44:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:14.727 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.727 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:14.727 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.985 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.985 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.985 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.985 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmI5OGFiMmFhY2NhODkzOWUyM2YyMmJjNWU1MDFhOTQxOTlkMDMwYWU4ZWFlZTdmYTI2MzU0ZTY2MDUwYzcyYYQ4hdE=: 00:17:14.985 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZmI5OGFiMmFhY2NhODkzOWUyM2YyMmJjNWU1MDFhOTQxOTlkMDMwYWU4ZWFlZTdmYTI2MzU0ZTY2MDUwYzcyYYQ4hdE=: 00:17:15.552 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.552 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:15.552 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.552 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.552 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.552 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:15.552 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.552 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:15.552 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:15.812 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:15.812 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.812 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:15.812 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:15.812 10:44:23 
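
The blocks from here on repeat the same round for ffdhe6144 and then ffdhe8192: before every attach, bdev_nvme_set_options pins the host to a single digest/DH-group combination, so a successful authentication can only have negotiated exactly those parameters. A hedged sketch of the loop shape driving this section (the array contents are placeholders, and connect_authenticate is the script's own helper seen in the trace):

    digest=sha384
    dhgroups=(ffdhe4096 ffdhe6144 ffdhe8192)   # the groups exercised in this part of the trace
    # keys/ckeys are generated earlier in the script; only the indices matter here.
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        # Restrict the host stack to one digest/dhgroup pair before attaching.
        "$RPC" -s /var/tmp/host.sock bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
    done
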
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:15.812 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.812 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.812 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.812 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.812 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.812 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.812 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.812 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.379 00:17:16.379 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.379 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.379 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.379 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.379 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.379 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.379 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.379 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.379 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.379 { 00:17:16.379 "cntlid": 81, 00:17:16.379 "qid": 0, 00:17:16.379 "state": "enabled", 00:17:16.379 "thread": "nvmf_tgt_poll_group_000", 00:17:16.379 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:16.379 "listen_address": { 00:17:16.379 "trtype": "TCP", 00:17:16.379 "adrfam": "IPv4", 00:17:16.379 "traddr": "10.0.0.2", 00:17:16.379 "trsvcid": "4420" 00:17:16.379 }, 00:17:16.379 "peer_address": { 00:17:16.379 "trtype": "TCP", 00:17:16.379 "adrfam": "IPv4", 00:17:16.379 "traddr": "10.0.0.1", 00:17:16.379 "trsvcid": "55170" 00:17:16.379 }, 00:17:16.379 "auth": { 00:17:16.379 "state": "completed", 00:17:16.379 "digest": 
"sha384", 00:17:16.379 "dhgroup": "ffdhe6144" 00:17:16.379 } 00:17:16.379 } 00:17:16.379 ]' 00:17:16.379 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.379 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:16.379 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.638 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:16.638 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.638 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.638 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.638 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.896 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDg5ZmY1MzBiYTM4MWJiYjRiNTFkYWU4YTFmY2RiYTk1MWZkN2QyNzBjOWRhNmI1Oox3TQ==: --dhchap-ctrl-secret DHHC-1:03:YjlmODhiNGExNWVjNDJhNmZlNmZlMjgwN2RiNzE5ZDMwZTc3YWU3Njk4NzEwNGFjMzIwYjk1ODljYWNkZDcyNxiKMjg=: 00:17:16.896 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDg5ZmY1MzBiYTM4MWJiYjRiNTFkYWU4YTFmY2RiYTk1MWZkN2QyNzBjOWRhNmI1Oox3TQ==: --dhchap-ctrl-secret DHHC-1:03:YjlmODhiNGExNWVjNDJhNmZlNmZlMjgwN2RiNzE5ZDMwZTc3YWU3Njk4NzEwNGFjMzIwYjk1ODljYWNkZDcyNxiKMjg=: 00:17:17.463 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.463 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.463 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:17.463 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.463 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.463 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.463 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:17.463 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:17.464 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:17.464 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:17.464 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.464 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:17.464 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:17.464 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:17.464 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.464 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.464 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.464 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.464 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.464 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.464 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.464 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.030 00:17:18.030 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.030 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.030 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.030 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.030 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.030 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.030 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.030 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.030 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.030 { 00:17:18.030 "cntlid": 83, 00:17:18.030 "qid": 0, 00:17:18.030 "state": "enabled", 00:17:18.030 "thread": "nvmf_tgt_poll_group_000", 00:17:18.030 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:18.030 "listen_address": { 00:17:18.030 "trtype": "TCP", 00:17:18.030 "adrfam": "IPv4", 00:17:18.030 "traddr": "10.0.0.2", 00:17:18.030 
"trsvcid": "4420" 00:17:18.030 }, 00:17:18.030 "peer_address": { 00:17:18.030 "trtype": "TCP", 00:17:18.030 "adrfam": "IPv4", 00:17:18.030 "traddr": "10.0.0.1", 00:17:18.030 "trsvcid": "45798" 00:17:18.030 }, 00:17:18.030 "auth": { 00:17:18.030 "state": "completed", 00:17:18.030 "digest": "sha384", 00:17:18.030 "dhgroup": "ffdhe6144" 00:17:18.030 } 00:17:18.030 } 00:17:18.030 ]' 00:17:18.031 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.288 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:18.288 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.288 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:18.288 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.289 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.289 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.289 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.546 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUzMWRlYmM1YzJlY2VjM2ZhMDI0YjllMDRkYzdlOGUPEylu: --dhchap-ctrl-secret DHHC-1:02:YTkyZGI2MTk1NzRjZDgyNDBiOTAyYWVhNjQ0ZDFjZmJmODRmOWQyNWNmMWRiOGM3TkyGxQ==: 00:17:18.547 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjUzMWRlYmM1YzJlY2VjM2ZhMDI0YjllMDRkYzdlOGUPEylu: --dhchap-ctrl-secret DHHC-1:02:YTkyZGI2MTk1NzRjZDgyNDBiOTAyYWVhNjQ0ZDFjZmJmODRmOWQyNWNmMWRiOGM3TkyGxQ==: 00:17:19.113 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.113 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:19.113 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.113 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.113 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.113 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.113 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:19.113 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:19.371 
10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:19.371 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.371 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:19.371 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:19.371 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:19.371 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.372 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.372 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.372 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.372 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.372 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.372 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.372 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.630 00:17:19.630 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.630 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.630 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.888 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.888 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.888 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.888 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.888 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.888 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.888 { 00:17:19.888 "cntlid": 85, 00:17:19.888 "qid": 0, 00:17:19.888 "state": "enabled", 00:17:19.888 "thread": "nvmf_tgt_poll_group_000", 00:17:19.888 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:19.888 "listen_address": { 00:17:19.888 "trtype": "TCP", 00:17:19.888 "adrfam": "IPv4", 00:17:19.888 "traddr": "10.0.0.2", 00:17:19.888 "trsvcid": "4420" 00:17:19.888 }, 00:17:19.888 "peer_address": { 00:17:19.888 "trtype": "TCP", 00:17:19.888 "adrfam": "IPv4", 00:17:19.888 "traddr": "10.0.0.1", 00:17:19.888 "trsvcid": "45834" 00:17:19.888 }, 00:17:19.888 "auth": { 00:17:19.888 "state": "completed", 00:17:19.888 "digest": "sha384", 00:17:19.888 "dhgroup": "ffdhe6144" 00:17:19.888 } 00:17:19.888 } 00:17:19.888 ]' 00:17:19.888 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:19.888 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:19.888 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.888 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:19.888 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.888 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.888 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.888 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.146 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWNjZTM0YzgwZmI0NzBjNTc2YTAwMWJjNjFlNGE1YzcyMDA5MWY5ZGNlODE3MzgwZqbY/Q==: --dhchap-ctrl-secret DHHC-1:01:YmNkNzQyYzY3NzcyODhjMGM5MzQwOTMyODU3YzA0Njio43xA: 00:17:20.146 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWNjZTM0YzgwZmI0NzBjNTc2YTAwMWJjNjFlNGE1YzcyMDA5MWY5ZGNlODE3MzgwZqbY/Q==: --dhchap-ctrl-secret DHHC-1:01:YmNkNzQyYzY3NzcyODhjMGM5MzQwOTMyODU3YzA0Njio43xA: 00:17:20.713 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.713 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:20.713 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.713 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.713 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.713 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.713 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:20.713 10:44:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:20.971 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:20.971 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.971 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:20.971 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:20.971 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:20.971 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.971 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:20.971 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.971 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.971 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.971 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:20.971 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:20.971 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:21.229 00:17:21.229 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.229 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.229 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.487 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.487 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.487 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.487 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.488 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.488 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.488 { 00:17:21.488 "cntlid": 87, 
00:17:21.488 "qid": 0, 00:17:21.488 "state": "enabled", 00:17:21.488 "thread": "nvmf_tgt_poll_group_000", 00:17:21.488 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:21.488 "listen_address": { 00:17:21.488 "trtype": "TCP", 00:17:21.488 "adrfam": "IPv4", 00:17:21.488 "traddr": "10.0.0.2", 00:17:21.488 "trsvcid": "4420" 00:17:21.488 }, 00:17:21.488 "peer_address": { 00:17:21.488 "trtype": "TCP", 00:17:21.488 "adrfam": "IPv4", 00:17:21.488 "traddr": "10.0.0.1", 00:17:21.488 "trsvcid": "45856" 00:17:21.488 }, 00:17:21.488 "auth": { 00:17:21.488 "state": "completed", 00:17:21.488 "digest": "sha384", 00:17:21.488 "dhgroup": "ffdhe6144" 00:17:21.488 } 00:17:21.488 } 00:17:21.488 ]' 00:17:21.488 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.488 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:21.488 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.746 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:21.746 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.746 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.746 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.746 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.746 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmI5OGFiMmFhY2NhODkzOWUyM2YyMmJjNWU1MDFhOTQxOTlkMDMwYWU4ZWFlZTdmYTI2MzU0ZTY2MDUwYzcyYYQ4hdE=: 00:17:21.746 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZmI5OGFiMmFhY2NhODkzOWUyM2YyMmJjNWU1MDFhOTQxOTlkMDMwYWU4ZWFlZTdmYTI2MzU0ZTY2MDUwYzcyYYQ4hdE=: 00:17:22.312 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.570 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:22.570 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.570 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.570 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.570 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:22.571 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.571 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:22.571 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:22.571 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:22.571 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.571 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:22.571 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:22.571 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:22.571 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.571 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.571 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.571 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.571 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.571 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.571 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.571 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.138 00:17:23.138 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.138 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.138 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.396 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.396 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.396 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.396 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.396 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.396 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.396 { 00:17:23.396 "cntlid": 89, 00:17:23.396 "qid": 0, 00:17:23.396 "state": "enabled", 00:17:23.396 "thread": "nvmf_tgt_poll_group_000", 00:17:23.396 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:23.396 "listen_address": { 00:17:23.396 "trtype": "TCP", 00:17:23.396 "adrfam": "IPv4", 00:17:23.396 "traddr": "10.0.0.2", 00:17:23.396 "trsvcid": "4420" 00:17:23.396 }, 00:17:23.396 "peer_address": { 00:17:23.396 "trtype": "TCP", 00:17:23.396 "adrfam": "IPv4", 00:17:23.396 "traddr": "10.0.0.1", 00:17:23.396 "trsvcid": "45870" 00:17:23.396 }, 00:17:23.396 "auth": { 00:17:23.396 "state": "completed", 00:17:23.396 "digest": "sha384", 00:17:23.396 "dhgroup": "ffdhe8192" 00:17:23.396 } 00:17:23.396 } 00:17:23.397 ]' 00:17:23.397 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.397 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:23.397 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.397 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:23.397 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.397 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.397 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.397 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.655 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDg5ZmY1MzBiYTM4MWJiYjRiNTFkYWU4YTFmY2RiYTk1MWZkN2QyNzBjOWRhNmI1Oox3TQ==: --dhchap-ctrl-secret DHHC-1:03:YjlmODhiNGExNWVjNDJhNmZlNmZlMjgwN2RiNzE5ZDMwZTc3YWU3Njk4NzEwNGFjMzIwYjk1ODljYWNkZDcyNxiKMjg=: 00:17:23.655 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDg5ZmY1MzBiYTM4MWJiYjRiNTFkYWU4YTFmY2RiYTk1MWZkN2QyNzBjOWRhNmI1Oox3TQ==: --dhchap-ctrl-secret DHHC-1:03:YjlmODhiNGExNWVjNDJhNmZlNmZlMjgwN2RiNzE5ZDMwZTc3YWU3Njk4NzEwNGFjMzIwYjk1ODljYWNkZDcyNxiKMjg=: 00:17:24.221 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.221 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:24.221 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.221 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.221 10:44:31 
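
One detail that distinguishes the key3 rounds visible above and earlier in the trace: the ckey expansion only appends --dhchap-ctrlr-key when a controller key exists for that index, so key3 authenticates one-way — the host proves its identity, but does not challenge the controller back. The expansion in isolation (a standalone illustration, not the script itself):

    # ${arr[i]:+words} expands to "words" only if arr[i] is set and non-empty,
    # so an empty ckeys entry silently drops the bidirectional-auth arguments.
    ckeys=([0]=c0secret [1]=c1secret [2]=c2secret [3]="")
    keyid=3
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "extra args: ${ckey[@]:-<none>}"    # key3 -> "extra args: <none>"
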
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.221 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.221 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:24.221 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:24.479 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:24.479 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.479 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:24.479 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:24.479 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:24.479 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.479 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.479 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.479 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.479 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.479 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.479 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.479 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.049 00:17:25.049 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.049 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.049 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.342 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.342 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:25.342 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.342 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.342 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.342 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.342 { 00:17:25.342 "cntlid": 91, 00:17:25.342 "qid": 0, 00:17:25.342 "state": "enabled", 00:17:25.342 "thread": "nvmf_tgt_poll_group_000", 00:17:25.342 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:25.342 "listen_address": { 00:17:25.342 "trtype": "TCP", 00:17:25.342 "adrfam": "IPv4", 00:17:25.342 "traddr": "10.0.0.2", 00:17:25.342 "trsvcid": "4420" 00:17:25.342 }, 00:17:25.342 "peer_address": { 00:17:25.342 "trtype": "TCP", 00:17:25.342 "adrfam": "IPv4", 00:17:25.342 "traddr": "10.0.0.1", 00:17:25.342 "trsvcid": "45896" 00:17:25.342 }, 00:17:25.342 "auth": { 00:17:25.342 "state": "completed", 00:17:25.342 "digest": "sha384", 00:17:25.342 "dhgroup": "ffdhe8192" 00:17:25.342 } 00:17:25.342 } 00:17:25.342 ]' 00:17:25.342 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.342 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:25.342 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.342 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:25.342 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.342 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.342 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.342 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.659 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUzMWRlYmM1YzJlY2VjM2ZhMDI0YjllMDRkYzdlOGUPEylu: --dhchap-ctrl-secret DHHC-1:02:YTkyZGI2MTk1NzRjZDgyNDBiOTAyYWVhNjQ0ZDFjZmJmODRmOWQyNWNmMWRiOGM3TkyGxQ==: 00:17:25.659 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjUzMWRlYmM1YzJlY2VjM2ZhMDI0YjllMDRkYzdlOGUPEylu: --dhchap-ctrl-secret DHHC-1:02:YTkyZGI2MTk1NzRjZDgyNDBiOTAyYWVhNjQ0ZDFjZmJmODRmOWQyNWNmMWRiOGM3TkyGxQ==: 00:17:26.243 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.243 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:26.243 10:44:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.243 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.243 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.243 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.243 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:26.243 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:26.243 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:26.243 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.243 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:26.243 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:26.243 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:26.243 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.243 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.243 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.243 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.502 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.502 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.502 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.502 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.761 00:17:26.761 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.761 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.761 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.019 10:44:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.019 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.019 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.019 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.019 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.019 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.019 { 00:17:27.019 "cntlid": 93, 00:17:27.019 "qid": 0, 00:17:27.019 "state": "enabled", 00:17:27.019 "thread": "nvmf_tgt_poll_group_000", 00:17:27.019 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:27.019 "listen_address": { 00:17:27.019 "trtype": "TCP", 00:17:27.019 "adrfam": "IPv4", 00:17:27.019 "traddr": "10.0.0.2", 00:17:27.019 "trsvcid": "4420" 00:17:27.019 }, 00:17:27.019 "peer_address": { 00:17:27.019 "trtype": "TCP", 00:17:27.019 "adrfam": "IPv4", 00:17:27.019 "traddr": "10.0.0.1", 00:17:27.019 "trsvcid": "42178" 00:17:27.019 }, 00:17:27.019 "auth": { 00:17:27.019 "state": "completed", 00:17:27.019 "digest": "sha384", 00:17:27.019 "dhgroup": "ffdhe8192" 00:17:27.019 } 00:17:27.019 } 00:17:27.019 ]' 00:17:27.020 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.020 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:27.020 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.278 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:27.278 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:27.278 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.278 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.278 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.536 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWNjZTM0YzgwZmI0NzBjNTc2YTAwMWJjNjFlNGE1YzcyMDA5MWY5ZGNlODE3MzgwZqbY/Q==: --dhchap-ctrl-secret DHHC-1:01:YmNkNzQyYzY3NzcyODhjMGM5MzQwOTMyODU3YzA0Njio43xA: 00:17:27.536 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWNjZTM0YzgwZmI0NzBjNTc2YTAwMWJjNjFlNGE1YzcyMDA5MWY5ZGNlODE3MzgwZqbY/Q==: --dhchap-ctrl-secret DHHC-1:01:YmNkNzQyYzY3NzcyODhjMGM5MzQwOTMyODU3YzA0Njio43xA: 00:17:28.105 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.105 10:44:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:28.105 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.105 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.105 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.105 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.105 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:28.105 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:28.105 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:28.105 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.105 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:28.105 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:28.105 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:28.105 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.105 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:28.105 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.105 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.105 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.105 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:28.105 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:28.105 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:28.673 00:17:28.673 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.673 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.673 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.931 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.931 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.931 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.931 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.931 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.931 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.931 { 00:17:28.931 "cntlid": 95, 00:17:28.931 "qid": 0, 00:17:28.931 "state": "enabled", 00:17:28.931 "thread": "nvmf_tgt_poll_group_000", 00:17:28.931 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:28.931 "listen_address": { 00:17:28.931 "trtype": "TCP", 00:17:28.931 "adrfam": "IPv4", 00:17:28.931 "traddr": "10.0.0.2", 00:17:28.931 "trsvcid": "4420" 00:17:28.931 }, 00:17:28.931 "peer_address": { 00:17:28.931 "trtype": "TCP", 00:17:28.931 "adrfam": "IPv4", 00:17:28.931 "traddr": "10.0.0.1", 00:17:28.931 "trsvcid": "42200" 00:17:28.931 }, 00:17:28.931 "auth": { 00:17:28.931 "state": "completed", 00:17:28.931 "digest": "sha384", 00:17:28.931 "dhgroup": "ffdhe8192" 00:17:28.931 } 00:17:28.931 } 00:17:28.931 ]' 00:17:28.931 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.932 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:28.932 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.932 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:28.932 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.932 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.932 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.932 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.190 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmI5OGFiMmFhY2NhODkzOWUyM2YyMmJjNWU1MDFhOTQxOTlkMDMwYWU4ZWFlZTdmYTI2MzU0ZTY2MDUwYzcyYYQ4hdE=: 00:17:29.190 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZmI5OGFiMmFhY2NhODkzOWUyM2YyMmJjNWU1MDFhOTQxOTlkMDMwYWU4ZWFlZTdmYTI2MzU0ZTY2MDUwYzcyYYQ4hdE=: 00:17:29.756 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.756 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.756 10:44:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:29.756 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.756 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.756 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.756 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:29.756 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:29.756 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.756 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:29.756 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:30.022 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:30.022 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:30.022 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:30.022 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:30.022 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:30.022 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.022 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.022 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.022 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.022 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.022 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.022 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.022 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.281 00:17:30.281 
10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.281 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.281 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.540 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.540 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.540 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.540 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.540 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.540 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.540 { 00:17:30.540 "cntlid": 97, 00:17:30.540 "qid": 0, 00:17:30.540 "state": "enabled", 00:17:30.540 "thread": "nvmf_tgt_poll_group_000", 00:17:30.540 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:30.540 "listen_address": { 00:17:30.540 "trtype": "TCP", 00:17:30.540 "adrfam": "IPv4", 00:17:30.540 "traddr": "10.0.0.2", 00:17:30.540 "trsvcid": "4420" 00:17:30.540 }, 00:17:30.540 "peer_address": { 00:17:30.540 "trtype": "TCP", 00:17:30.540 "adrfam": "IPv4", 00:17:30.540 "traddr": "10.0.0.1", 00:17:30.540 "trsvcid": "42208" 00:17:30.540 }, 00:17:30.540 "auth": { 00:17:30.540 "state": "completed", 00:17:30.540 "digest": "sha512", 00:17:30.540 "dhgroup": "null" 00:17:30.540 } 00:17:30.540 } 00:17:30.540 ]' 00:17:30.540 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.540 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:30.540 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.540 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:30.540 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.540 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.540 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.540 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.798 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDg5ZmY1MzBiYTM4MWJiYjRiNTFkYWU4YTFmY2RiYTk1MWZkN2QyNzBjOWRhNmI1Oox3TQ==: --dhchap-ctrl-secret DHHC-1:03:YjlmODhiNGExNWVjNDJhNmZlNmZlMjgwN2RiNzE5ZDMwZTc3YWU3Njk4NzEwNGFjMzIwYjk1ODljYWNkZDcyNxiKMjg=: 00:17:30.798 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDg5ZmY1MzBiYTM4MWJiYjRiNTFkYWU4YTFmY2RiYTk1MWZkN2QyNzBjOWRhNmI1Oox3TQ==: --dhchap-ctrl-secret DHHC-1:03:YjlmODhiNGExNWVjNDJhNmZlNmZlMjgwN2RiNzE5ZDMwZTc3YWU3Njk4NzEwNGFjMzIwYjk1ODljYWNkZDcyNxiKMjg=: 00:17:31.366 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.366 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:31.366 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.366 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.366 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.366 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.366 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:31.366 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:31.623 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:31.623 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.623 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:31.624 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:31.624 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:31.624 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.624 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.624 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.624 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.624 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.624 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.624 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.624 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.881 00:17:31.881 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.881 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.881 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.160 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.160 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.160 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.160 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.160 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.160 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.160 { 00:17:32.160 "cntlid": 99, 00:17:32.160 "qid": 0, 00:17:32.160 "state": "enabled", 00:17:32.160 "thread": "nvmf_tgt_poll_group_000", 00:17:32.160 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:32.160 "listen_address": { 00:17:32.160 "trtype": "TCP", 00:17:32.160 "adrfam": "IPv4", 00:17:32.160 "traddr": "10.0.0.2", 00:17:32.160 "trsvcid": "4420" 00:17:32.160 }, 00:17:32.160 "peer_address": { 00:17:32.160 "trtype": "TCP", 00:17:32.160 "adrfam": "IPv4", 00:17:32.160 "traddr": "10.0.0.1", 00:17:32.160 "trsvcid": "42240" 00:17:32.160 }, 00:17:32.160 "auth": { 00:17:32.160 "state": "completed", 00:17:32.160 "digest": "sha512", 00:17:32.160 "dhgroup": "null" 00:17:32.161 } 00:17:32.161 } 00:17:32.161 ]' 00:17:32.161 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.161 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:32.161 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.161 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:32.161 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.161 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.161 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.161 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.419 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUzMWRlYmM1YzJlY2VjM2ZhMDI0YjllMDRkYzdlOGUPEylu: --dhchap-ctrl-secret DHHC-1:02:YTkyZGI2MTk1NzRjZDgyNDBiOTAyYWVhNjQ0ZDFjZmJmODRmOWQyNWNmMWRiOGM3TkyGxQ==: 00:17:32.419 10:44:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjUzMWRlYmM1YzJlY2VjM2ZhMDI0YjllMDRkYzdlOGUPEylu: --dhchap-ctrl-secret DHHC-1:02:YTkyZGI2MTk1NzRjZDgyNDBiOTAyYWVhNjQ0ZDFjZmJmODRmOWQyNWNmMWRiOGM3TkyGxQ==: 00:17:32.984 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.984 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:32.984 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.984 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.985 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.985 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.985 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:32.985 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:33.243 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:17:33.243 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.243 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:33.243 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:33.243 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:33.243 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.243 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.243 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.243 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.243 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.243 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.243 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:33.243 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.501 00:17:33.501 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.501 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.501 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.760 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.760 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.760 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.760 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.760 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.760 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.760 { 00:17:33.760 "cntlid": 101, 00:17:33.760 "qid": 0, 00:17:33.760 "state": "enabled", 00:17:33.760 "thread": "nvmf_tgt_poll_group_000", 00:17:33.760 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:33.760 "listen_address": { 00:17:33.760 "trtype": "TCP", 00:17:33.760 "adrfam": "IPv4", 00:17:33.760 "traddr": "10.0.0.2", 00:17:33.760 "trsvcid": "4420" 00:17:33.760 }, 00:17:33.760 "peer_address": { 00:17:33.760 "trtype": "TCP", 00:17:33.760 "adrfam": "IPv4", 00:17:33.760 "traddr": "10.0.0.1", 00:17:33.760 "trsvcid": "42268" 00:17:33.760 }, 00:17:33.760 "auth": { 00:17:33.760 "state": "completed", 00:17:33.760 "digest": "sha512", 00:17:33.760 "dhgroup": "null" 00:17:33.760 } 00:17:33.760 } 00:17:33.760 ]' 00:17:33.760 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.760 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:33.760 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.760 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:33.760 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.760 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.760 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.760 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.019 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ZWNjZTM0YzgwZmI0NzBjNTc2YTAwMWJjNjFlNGE1YzcyMDA5MWY5ZGNlODE3MzgwZqbY/Q==: --dhchap-ctrl-secret DHHC-1:01:YmNkNzQyYzY3NzcyODhjMGM5MzQwOTMyODU3YzA0Njio43xA: 00:17:34.019 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWNjZTM0YzgwZmI0NzBjNTc2YTAwMWJjNjFlNGE1YzcyMDA5MWY5ZGNlODE3MzgwZqbY/Q==: --dhchap-ctrl-secret DHHC-1:01:YmNkNzQyYzY3NzcyODhjMGM5MzQwOTMyODU3YzA0Njio43xA: 00:17:34.586 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.586 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:34.586 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.586 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.586 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.586 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.587 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:34.587 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:34.845 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:34.845 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:34.845 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:34.845 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:34.845 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:34.845 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.845 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:34.845 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.845 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.845 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.845 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:34.845 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:34.845 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:35.105 00:17:35.105 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.105 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.105 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.363 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.363 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.363 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.363 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.363 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.363 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.363 { 00:17:35.363 "cntlid": 103, 00:17:35.363 "qid": 0, 00:17:35.363 "state": "enabled", 00:17:35.363 "thread": "nvmf_tgt_poll_group_000", 00:17:35.363 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:35.363 "listen_address": { 00:17:35.363 "trtype": "TCP", 00:17:35.363 "adrfam": "IPv4", 00:17:35.363 "traddr": "10.0.0.2", 00:17:35.363 "trsvcid": "4420" 00:17:35.363 }, 00:17:35.363 "peer_address": { 00:17:35.363 "trtype": "TCP", 00:17:35.363 "adrfam": "IPv4", 00:17:35.363 "traddr": "10.0.0.1", 00:17:35.363 "trsvcid": "42302" 00:17:35.363 }, 00:17:35.363 "auth": { 00:17:35.363 "state": "completed", 00:17:35.363 "digest": "sha512", 00:17:35.363 "dhgroup": "null" 00:17:35.363 } 00:17:35.363 } 00:17:35.363 ]' 00:17:35.363 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.363 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:35.363 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.363 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:35.363 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.621 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.621 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.621 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.621 10:44:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmI5OGFiMmFhY2NhODkzOWUyM2YyMmJjNWU1MDFhOTQxOTlkMDMwYWU4ZWFlZTdmYTI2MzU0ZTY2MDUwYzcyYYQ4hdE=: 00:17:35.621 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZmI5OGFiMmFhY2NhODkzOWUyM2YyMmJjNWU1MDFhOTQxOTlkMDMwYWU4ZWFlZTdmYTI2MzU0ZTY2MDUwYzcyYYQ4hdE=: 00:17:36.187 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.187 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:36.187 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.187 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.187 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.187 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:36.187 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:36.187 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:36.187 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:36.446 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:36.446 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:36.446 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:36.446 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:36.446 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:36.446 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.446 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.446 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.446 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.446 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.446 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:17:36.446 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.446 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.704 00:17:36.704 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.704 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.704 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.962 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.962 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.962 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.962 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.962 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.962 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.962 { 00:17:36.962 "cntlid": 105, 00:17:36.962 "qid": 0, 00:17:36.962 "state": "enabled", 00:17:36.962 "thread": "nvmf_tgt_poll_group_000", 00:17:36.962 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:36.962 "listen_address": { 00:17:36.962 "trtype": "TCP", 00:17:36.962 "adrfam": "IPv4", 00:17:36.962 "traddr": "10.0.0.2", 00:17:36.962 "trsvcid": "4420" 00:17:36.962 }, 00:17:36.962 "peer_address": { 00:17:36.962 "trtype": "TCP", 00:17:36.962 "adrfam": "IPv4", 00:17:36.962 "traddr": "10.0.0.1", 00:17:36.962 "trsvcid": "48362" 00:17:36.962 }, 00:17:36.962 "auth": { 00:17:36.962 "state": "completed", 00:17:36.962 "digest": "sha512", 00:17:36.962 "dhgroup": "ffdhe2048" 00:17:36.962 } 00:17:36.962 } 00:17:36.962 ]' 00:17:36.962 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.962 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:36.962 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.962 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:36.962 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.220 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.220 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.220 10:44:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.221 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDg5ZmY1MzBiYTM4MWJiYjRiNTFkYWU4YTFmY2RiYTk1MWZkN2QyNzBjOWRhNmI1Oox3TQ==: --dhchap-ctrl-secret DHHC-1:03:YjlmODhiNGExNWVjNDJhNmZlNmZlMjgwN2RiNzE5ZDMwZTc3YWU3Njk4NzEwNGFjMzIwYjk1ODljYWNkZDcyNxiKMjg=: 00:17:37.221 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDg5ZmY1MzBiYTM4MWJiYjRiNTFkYWU4YTFmY2RiYTk1MWZkN2QyNzBjOWRhNmI1Oox3TQ==: --dhchap-ctrl-secret DHHC-1:03:YjlmODhiNGExNWVjNDJhNmZlNmZlMjgwN2RiNzE5ZDMwZTc3YWU3Njk4NzEwNGFjMzIwYjk1ODljYWNkZDcyNxiKMjg=: 00:17:37.787 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.787 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:37.787 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.787 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.787 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.787 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.787 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:37.787 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:38.045 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:38.045 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:38.045 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:38.045 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:38.045 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:38.045 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.045 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.045 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.045 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:38.045 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.045 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.045 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.045 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.304 00:17:38.304 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:38.304 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:38.304 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.562 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.562 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.562 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.562 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.562 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.562 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.562 { 00:17:38.562 "cntlid": 107, 00:17:38.562 "qid": 0, 00:17:38.562 "state": "enabled", 00:17:38.562 "thread": "nvmf_tgt_poll_group_000", 00:17:38.562 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:38.562 "listen_address": { 00:17:38.562 "trtype": "TCP", 00:17:38.562 "adrfam": "IPv4", 00:17:38.562 "traddr": "10.0.0.2", 00:17:38.562 "trsvcid": "4420" 00:17:38.562 }, 00:17:38.562 "peer_address": { 00:17:38.562 "trtype": "TCP", 00:17:38.562 "adrfam": "IPv4", 00:17:38.562 "traddr": "10.0.0.1", 00:17:38.562 "trsvcid": "48384" 00:17:38.562 }, 00:17:38.562 "auth": { 00:17:38.562 "state": "completed", 00:17:38.562 "digest": "sha512", 00:17:38.562 "dhgroup": "ffdhe2048" 00:17:38.562 } 00:17:38.562 } 00:17:38.562 ]' 00:17:38.562 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.562 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:38.562 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.562 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:38.562 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:17:38.821 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.821 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.821 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.821 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUzMWRlYmM1YzJlY2VjM2ZhMDI0YjllMDRkYzdlOGUPEylu: --dhchap-ctrl-secret DHHC-1:02:YTkyZGI2MTk1NzRjZDgyNDBiOTAyYWVhNjQ0ZDFjZmJmODRmOWQyNWNmMWRiOGM3TkyGxQ==: 00:17:38.821 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjUzMWRlYmM1YzJlY2VjM2ZhMDI0YjllMDRkYzdlOGUPEylu: --dhchap-ctrl-secret DHHC-1:02:YTkyZGI2MTk1NzRjZDgyNDBiOTAyYWVhNjQ0ZDFjZmJmODRmOWQyNWNmMWRiOGM3TkyGxQ==: 00:17:39.389 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.389 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.389 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:39.389 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.389 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.389 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.389 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:39.389 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:39.389 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:39.648 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:39.648 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.648 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:39.648 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:39.648 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:39.648 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.648 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
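The records above are one full iteration of the auth matrix in target/auth.sh: the host-side bdev_nvme_set_options call pins the initiator to a single digest/dhgroup pair, the host is re-admitted to the subsystem with the key pair under test, and a controller attach forces the DH-HMAC-CHAP handshake before everything is torn back down. A condensed sketch of that sequence, using only the RPCs visible in this trace — key2/ckey2 name key material registered earlier in the test (outside this excerpt), and the target-side calls go to the default RPC socket, as the harness's rpc_cmd wrapper appears to do here:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562

# Pin the host (initiator) application to the digest/dhgroup pair under test.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

# Re-admit the host on the target with this iteration's key pair
# (key2/ckey2 refer to keys registered earlier in the test, not shown here).
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Attaching a controller through the host application triggers the handshake.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Confirm the controller came up, inspect the negotiated auth parameters,
# then detach and remove the host before the next combination.
"$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
"$rpc" nvmf_subsystem_get_qpairs "$subnqn"
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Driving the initiator and the target through separate RPC sockets (/var/tmp/host.sock versus the default) is what lets both applications appear in a single trace while keeping independent option state.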
00:17:39.648 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.648 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.648 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.648 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.648 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.648 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.908 00:17:39.908 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.908 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.908 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.166 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.166 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.166 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.166 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.166 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.166 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:40.166 { 00:17:40.166 "cntlid": 109, 00:17:40.166 "qid": 0, 00:17:40.166 "state": "enabled", 00:17:40.166 "thread": "nvmf_tgt_poll_group_000", 00:17:40.166 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:40.166 "listen_address": { 00:17:40.166 "trtype": "TCP", 00:17:40.166 "adrfam": "IPv4", 00:17:40.166 "traddr": "10.0.0.2", 00:17:40.166 "trsvcid": "4420" 00:17:40.166 }, 00:17:40.166 "peer_address": { 00:17:40.166 "trtype": "TCP", 00:17:40.166 "adrfam": "IPv4", 00:17:40.166 "traddr": "10.0.0.1", 00:17:40.166 "trsvcid": "48416" 00:17:40.166 }, 00:17:40.166 "auth": { 00:17:40.166 "state": "completed", 00:17:40.166 "digest": "sha512", 00:17:40.166 "dhgroup": "ffdhe2048" 00:17:40.166 } 00:17:40.166 } 00:17:40.166 ]' 00:17:40.166 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:40.166 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:40.166 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:40.166 10:44:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:40.166 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:40.166 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.166 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.166 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.425 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWNjZTM0YzgwZmI0NzBjNTc2YTAwMWJjNjFlNGE1YzcyMDA5MWY5ZGNlODE3MzgwZqbY/Q==: --dhchap-ctrl-secret DHHC-1:01:YmNkNzQyYzY3NzcyODhjMGM5MzQwOTMyODU3YzA0Njio43xA: 00:17:40.425 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWNjZTM0YzgwZmI0NzBjNTc2YTAwMWJjNjFlNGE1YzcyMDA5MWY5ZGNlODE3MzgwZqbY/Q==: --dhchap-ctrl-secret DHHC-1:01:YmNkNzQyYzY3NzcyODhjMGM5MzQwOTMyODU3YzA0Njio43xA: 00:17:40.992 10:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.992 10:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:40.992 10:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.992 10:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.992 10:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.992 10:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.992 10:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:40.992 10:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:41.252 10:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:41.252 10:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:41.252 10:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:41.252 10:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:41.252 10:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:41.252 10:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.252 10:44:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:41.252 10:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.252 10:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.252 10:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.252 10:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:41.252 10:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:41.252 10:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:41.511 00:17:41.511 10:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.511 10:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.511 10:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.769 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.769 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.769 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.769 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.769 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.769 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.769 { 00:17:41.769 "cntlid": 111, 00:17:41.769 "qid": 0, 00:17:41.769 "state": "enabled", 00:17:41.769 "thread": "nvmf_tgt_poll_group_000", 00:17:41.769 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:41.769 "listen_address": { 00:17:41.769 "trtype": "TCP", 00:17:41.769 "adrfam": "IPv4", 00:17:41.769 "traddr": "10.0.0.2", 00:17:41.769 "trsvcid": "4420" 00:17:41.769 }, 00:17:41.769 "peer_address": { 00:17:41.769 "trtype": "TCP", 00:17:41.769 "adrfam": "IPv4", 00:17:41.769 "traddr": "10.0.0.1", 00:17:41.769 "trsvcid": "48454" 00:17:41.769 }, 00:17:41.769 "auth": { 00:17:41.769 "state": "completed", 00:17:41.769 "digest": "sha512", 00:17:41.769 "dhgroup": "ffdhe2048" 00:17:41.769 } 00:17:41.769 } 00:17:41.769 ]' 00:17:41.769 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.769 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:41.770 
10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.770 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:41.770 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.770 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.770 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.770 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.028 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmI5OGFiMmFhY2NhODkzOWUyM2YyMmJjNWU1MDFhOTQxOTlkMDMwYWU4ZWFlZTdmYTI2MzU0ZTY2MDUwYzcyYYQ4hdE=: 00:17:42.028 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZmI5OGFiMmFhY2NhODkzOWUyM2YyMmJjNWU1MDFhOTQxOTlkMDMwYWU4ZWFlZTdmYTI2MzU0ZTY2MDUwYzcyYYQ4hdE=: 00:17:42.595 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.595 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.595 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:42.595 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.595 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.595 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.595 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:42.595 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.595 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:42.595 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:42.854 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:42.854 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.854 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:42.854 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:42.854 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:42.854 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.854 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.854 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.854 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.854 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.854 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.854 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.854 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.112 00:17:43.112 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:43.112 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:43.112 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.371 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.371 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.371 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.371 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.371 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.371 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.371 { 00:17:43.371 "cntlid": 113, 00:17:43.371 "qid": 0, 00:17:43.371 "state": "enabled", 00:17:43.371 "thread": "nvmf_tgt_poll_group_000", 00:17:43.371 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:43.371 "listen_address": { 00:17:43.371 "trtype": "TCP", 00:17:43.371 "adrfam": "IPv4", 00:17:43.371 "traddr": "10.0.0.2", 00:17:43.371 "trsvcid": "4420" 00:17:43.371 }, 00:17:43.371 "peer_address": { 00:17:43.371 "trtype": "TCP", 00:17:43.371 "adrfam": "IPv4", 00:17:43.371 "traddr": "10.0.0.1", 00:17:43.371 "trsvcid": "48480" 00:17:43.371 }, 00:17:43.371 "auth": { 00:17:43.371 "state": "completed", 00:17:43.371 "digest": "sha512", 00:17:43.371 "dhgroup": "ffdhe3072" 00:17:43.371 } 00:17:43.371 } 00:17:43.371 ]' 00:17:43.371 10:44:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.371 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:43.371 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.371 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:43.371 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.371 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.371 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.371 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.629 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDg5ZmY1MzBiYTM4MWJiYjRiNTFkYWU4YTFmY2RiYTk1MWZkN2QyNzBjOWRhNmI1Oox3TQ==: --dhchap-ctrl-secret DHHC-1:03:YjlmODhiNGExNWVjNDJhNmZlNmZlMjgwN2RiNzE5ZDMwZTc3YWU3Njk4NzEwNGFjMzIwYjk1ODljYWNkZDcyNxiKMjg=: 00:17:43.629 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDg5ZmY1MzBiYTM4MWJiYjRiNTFkYWU4YTFmY2RiYTk1MWZkN2QyNzBjOWRhNmI1Oox3TQ==: --dhchap-ctrl-secret DHHC-1:03:YjlmODhiNGExNWVjNDJhNmZlNmZlMjgwN2RiNzE5ZDMwZTc3YWU3Njk4NzEwNGFjMzIwYjk1ODljYWNkZDcyNxiKMjg=: 00:17:44.197 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.197 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:44.197 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.197 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.197 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.197 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:44.197 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:44.197 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:44.455 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:44.455 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:44.455 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:17:44.455 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:44.455 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:44.455 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.455 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.455 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.455 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.455 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.455 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.455 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.455 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.714 00:17:44.714 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.714 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.714 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.973 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.973 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.973 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.973 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.973 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.973 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.973 { 00:17:44.973 "cntlid": 115, 00:17:44.973 "qid": 0, 00:17:44.973 "state": "enabled", 00:17:44.973 "thread": "nvmf_tgt_poll_group_000", 00:17:44.973 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:44.973 "listen_address": { 00:17:44.973 "trtype": "TCP", 00:17:44.973 "adrfam": "IPv4", 00:17:44.973 "traddr": "10.0.0.2", 00:17:44.973 "trsvcid": "4420" 00:17:44.973 }, 00:17:44.973 "peer_address": { 00:17:44.973 "trtype": "TCP", 00:17:44.973 "adrfam": "IPv4", 
00:17:44.973 "traddr": "10.0.0.1", 00:17:44.973 "trsvcid": "48506" 00:17:44.973 }, 00:17:44.973 "auth": { 00:17:44.973 "state": "completed", 00:17:44.973 "digest": "sha512", 00:17:44.973 "dhgroup": "ffdhe3072" 00:17:44.973 } 00:17:44.973 } 00:17:44.973 ]' 00:17:44.973 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.973 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:44.973 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.973 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:44.973 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.973 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.973 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.973 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.231 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUzMWRlYmM1YzJlY2VjM2ZhMDI0YjllMDRkYzdlOGUPEylu: --dhchap-ctrl-secret DHHC-1:02:YTkyZGI2MTk1NzRjZDgyNDBiOTAyYWVhNjQ0ZDFjZmJmODRmOWQyNWNmMWRiOGM3TkyGxQ==: 00:17:45.231 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjUzMWRlYmM1YzJlY2VjM2ZhMDI0YjllMDRkYzdlOGUPEylu: --dhchap-ctrl-secret DHHC-1:02:YTkyZGI2MTk1NzRjZDgyNDBiOTAyYWVhNjQ0ZDFjZmJmODRmOWQyNWNmMWRiOGM3TkyGxQ==: 00:17:45.798 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.798 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:45.798 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.798 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.798 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.798 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:45.798 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:45.798 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:46.057 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:17:46.057 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:46.057 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:46.057 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:46.057 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:46.057 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.057 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.057 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.057 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.057 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.057 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.057 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.057 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.316 00:17:46.316 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:46.316 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:46.316 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.574 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.574 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.574 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.574 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.574 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.574 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.574 { 00:17:46.574 "cntlid": 117, 00:17:46.574 "qid": 0, 00:17:46.574 "state": "enabled", 00:17:46.574 "thread": "nvmf_tgt_poll_group_000", 00:17:46.574 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:46.574 "listen_address": { 00:17:46.574 "trtype": "TCP", 
00:17:46.574 "adrfam": "IPv4", 00:17:46.574 "traddr": "10.0.0.2", 00:17:46.574 "trsvcid": "4420" 00:17:46.574 }, 00:17:46.574 "peer_address": { 00:17:46.574 "trtype": "TCP", 00:17:46.574 "adrfam": "IPv4", 00:17:46.574 "traddr": "10.0.0.1", 00:17:46.574 "trsvcid": "48534" 00:17:46.574 }, 00:17:46.574 "auth": { 00:17:46.574 "state": "completed", 00:17:46.574 "digest": "sha512", 00:17:46.574 "dhgroup": "ffdhe3072" 00:17:46.574 } 00:17:46.574 } 00:17:46.574 ]' 00:17:46.574 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.574 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:46.574 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.574 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:46.574 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.574 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.574 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.574 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.833 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWNjZTM0YzgwZmI0NzBjNTc2YTAwMWJjNjFlNGE1YzcyMDA5MWY5ZGNlODE3MzgwZqbY/Q==: --dhchap-ctrl-secret DHHC-1:01:YmNkNzQyYzY3NzcyODhjMGM5MzQwOTMyODU3YzA0Njio43xA: 00:17:46.833 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWNjZTM0YzgwZmI0NzBjNTc2YTAwMWJjNjFlNGE1YzcyMDA5MWY5ZGNlODE3MzgwZqbY/Q==: --dhchap-ctrl-secret DHHC-1:01:YmNkNzQyYzY3NzcyODhjMGM5MzQwOTMyODU3YzA0Njio43xA: 00:17:47.400 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.400 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:47.400 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.400 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.400 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.400 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.400 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:47.400 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:47.659 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:47.659 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:47.660 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:47.660 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:47.660 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:47.660 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.660 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:47.660 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.660 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.660 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.660 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:47.660 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:47.660 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:47.918 00:17:47.918 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:47.918 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:47.918 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.177 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.177 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.177 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.177 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.177 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.177 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:48.177 { 00:17:48.177 "cntlid": 119, 00:17:48.177 "qid": 0, 00:17:48.177 "state": "enabled", 00:17:48.177 "thread": "nvmf_tgt_poll_group_000", 00:17:48.177 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:48.177 "listen_address": { 00:17:48.177 "trtype": "TCP", 00:17:48.177 "adrfam": "IPv4", 00:17:48.177 "traddr": "10.0.0.2", 00:17:48.177 "trsvcid": "4420" 00:17:48.177 }, 00:17:48.177 "peer_address": { 00:17:48.177 "trtype": "TCP", 00:17:48.177 "adrfam": "IPv4", 00:17:48.177 "traddr": "10.0.0.1", 00:17:48.177 "trsvcid": "35586" 00:17:48.177 }, 00:17:48.177 "auth": { 00:17:48.177 "state": "completed", 00:17:48.177 "digest": "sha512", 00:17:48.177 "dhgroup": "ffdhe3072" 00:17:48.177 } 00:17:48.177 } 00:17:48.177 ]' 00:17:48.177 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:48.177 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:48.177 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:48.177 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:48.177 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:48.177 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.177 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.177 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.435 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmI5OGFiMmFhY2NhODkzOWUyM2YyMmJjNWU1MDFhOTQxOTlkMDMwYWU4ZWFlZTdmYTI2MzU0ZTY2MDUwYzcyYYQ4hdE=: 00:17:48.435 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZmI5OGFiMmFhY2NhODkzOWUyM2YyMmJjNWU1MDFhOTQxOTlkMDMwYWU4ZWFlZTdmYTI2MzU0ZTY2MDUwYzcyYYQ4hdE=: 00:17:49.003 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.003 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:49.003 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.003 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.003 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.003 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:49.003 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.003 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:49.003 10:44:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:49.261 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:17:49.261 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.261 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:49.261 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:49.261 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:49.261 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.261 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.261 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.261 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.261 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.261 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.261 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.261 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.520 00:17:49.520 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:49.520 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:49.520 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.778 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.778 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.778 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.778 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.778 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.778 10:44:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:49.778 { 00:17:49.778 "cntlid": 121, 00:17:49.778 "qid": 0, 00:17:49.778 "state": "enabled", 00:17:49.778 "thread": "nvmf_tgt_poll_group_000", 00:17:49.778 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:49.778 "listen_address": { 00:17:49.778 "trtype": "TCP", 00:17:49.778 "adrfam": "IPv4", 00:17:49.778 "traddr": "10.0.0.2", 00:17:49.778 "trsvcid": "4420" 00:17:49.778 }, 00:17:49.778 "peer_address": { 00:17:49.778 "trtype": "TCP", 00:17:49.778 "adrfam": "IPv4", 00:17:49.778 "traddr": "10.0.0.1", 00:17:49.778 "trsvcid": "35618" 00:17:49.778 }, 00:17:49.778 "auth": { 00:17:49.778 "state": "completed", 00:17:49.778 "digest": "sha512", 00:17:49.778 "dhgroup": "ffdhe4096" 00:17:49.778 } 00:17:49.778 } 00:17:49.778 ]' 00:17:49.778 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.779 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:49.779 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.779 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:49.779 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.779 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.779 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.779 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.037 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDg5ZmY1MzBiYTM4MWJiYjRiNTFkYWU4YTFmY2RiYTk1MWZkN2QyNzBjOWRhNmI1Oox3TQ==: --dhchap-ctrl-secret DHHC-1:03:YjlmODhiNGExNWVjNDJhNmZlNmZlMjgwN2RiNzE5ZDMwZTc3YWU3Njk4NzEwNGFjMzIwYjk1ODljYWNkZDcyNxiKMjg=: 00:17:50.037 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDg5ZmY1MzBiYTM4MWJiYjRiNTFkYWU4YTFmY2RiYTk1MWZkN2QyNzBjOWRhNmI1Oox3TQ==: --dhchap-ctrl-secret DHHC-1:03:YjlmODhiNGExNWVjNDJhNmZlNmZlMjgwN2RiNzE5ZDMwZTc3YWU3Njk4NzEwNGFjMzIwYjk1ODljYWNkZDcyNxiKMjg=: 00:17:50.603 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.603 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:50.603 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.603 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.603 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
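Between the attach and the teardown, each iteration validates the negotiated parameters from the target's point of view, then repeats the handshake once more through the kernel initiator with nvme-cli. Schematically — the DHHC-1 secrets are abbreviated below; the full transformed strings appear verbatim in the trace:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostid=80aaeb9f-0274-ea11-906e-0017a4403562

# Target-side check on the qpair created by the bdev attach: the negotiated
# digest/dhgroup must match what was configured for this iteration, and the
# authentication state must read "completed".
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Kernel-initiator pass: the host NQN is derived from the host UUID and the
# same secrets are passed literally (full DHHC-1 strings elided here).
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
    -q "nqn.2014-08.org.nvmexpress:uuid:$hostid" --hostid "$hostid" -l 0 \
    --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
nvme disconnect -n "$subnqn"

Passing --dhchap-ctrl-secret alongside --dhchap-secret exercises bidirectional authentication; iterations whose key has no controller counterpart (key3 in this trace) omit --dhchap-ctrlr-key on the add_host call and --dhchap-ctrl-secret on the nvme connect, as the ${ckeys[$3]:+...} expansion in the trace shows.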
00:17:50.604 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:50.604 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:50.604 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:50.862 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:17:50.862 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.862 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:50.862 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:50.862 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:50.862 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.862 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.862 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.862 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.862 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.862 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.862 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.862 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.121 00:17:51.121 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:51.121 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:51.121 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.380 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.380 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.380 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.380 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.380 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.380 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:51.380 { 00:17:51.380 "cntlid": 123, 00:17:51.380 "qid": 0, 00:17:51.380 "state": "enabled", 00:17:51.380 "thread": "nvmf_tgt_poll_group_000", 00:17:51.380 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:51.380 "listen_address": { 00:17:51.380 "trtype": "TCP", 00:17:51.380 "adrfam": "IPv4", 00:17:51.380 "traddr": "10.0.0.2", 00:17:51.380 "trsvcid": "4420" 00:17:51.380 }, 00:17:51.380 "peer_address": { 00:17:51.380 "trtype": "TCP", 00:17:51.380 "adrfam": "IPv4", 00:17:51.380 "traddr": "10.0.0.1", 00:17:51.380 "trsvcid": "35656" 00:17:51.380 }, 00:17:51.380 "auth": { 00:17:51.380 "state": "completed", 00:17:51.380 "digest": "sha512", 00:17:51.380 "dhgroup": "ffdhe4096" 00:17:51.380 } 00:17:51.380 } 00:17:51.380 ]' 00:17:51.380 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:51.380 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:51.380 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:51.380 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:51.380 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:51.380 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.380 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.380 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.638 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUzMWRlYmM1YzJlY2VjM2ZhMDI0YjllMDRkYzdlOGUPEylu: --dhchap-ctrl-secret DHHC-1:02:YTkyZGI2MTk1NzRjZDgyNDBiOTAyYWVhNjQ0ZDFjZmJmODRmOWQyNWNmMWRiOGM3TkyGxQ==: 00:17:51.638 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjUzMWRlYmM1YzJlY2VjM2ZhMDI0YjllMDRkYzdlOGUPEylu: --dhchap-ctrl-secret DHHC-1:02:YTkyZGI2MTk1NzRjZDgyNDBiOTAyYWVhNjQ0ZDFjZmJmODRmOWQyNWNmMWRiOGM3TkyGxQ==: 00:17:52.205 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.205 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:52.205 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.205 10:44:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.205 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.205 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:52.205 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:52.205 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:52.464 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:17:52.464 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.464 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:52.464 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:52.464 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:52.464 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.464 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.464 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.464 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.464 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.464 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.464 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.464 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.722 00:17:52.722 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:52.722 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.722 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.979 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.979 10:45:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.979 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.979 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.979 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.979 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.979 { 00:17:52.980 "cntlid": 125, 00:17:52.980 "qid": 0, 00:17:52.980 "state": "enabled", 00:17:52.980 "thread": "nvmf_tgt_poll_group_000", 00:17:52.980 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:52.980 "listen_address": { 00:17:52.980 "trtype": "TCP", 00:17:52.980 "adrfam": "IPv4", 00:17:52.980 "traddr": "10.0.0.2", 00:17:52.980 "trsvcid": "4420" 00:17:52.980 }, 00:17:52.980 "peer_address": { 00:17:52.980 "trtype": "TCP", 00:17:52.980 "adrfam": "IPv4", 00:17:52.980 "traddr": "10.0.0.1", 00:17:52.980 "trsvcid": "35684" 00:17:52.980 }, 00:17:52.980 "auth": { 00:17:52.980 "state": "completed", 00:17:52.980 "digest": "sha512", 00:17:52.980 "dhgroup": "ffdhe4096" 00:17:52.980 } 00:17:52.980 } 00:17:52.980 ]' 00:17:52.980 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.980 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:52.980 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.980 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:52.980 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.238 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.238 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.238 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.238 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWNjZTM0YzgwZmI0NzBjNTc2YTAwMWJjNjFlNGE1YzcyMDA5MWY5ZGNlODE3MzgwZqbY/Q==: --dhchap-ctrl-secret DHHC-1:01:YmNkNzQyYzY3NzcyODhjMGM5MzQwOTMyODU3YzA0Njio43xA: 00:17:53.238 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWNjZTM0YzgwZmI0NzBjNTc2YTAwMWJjNjFlNGE1YzcyMDA5MWY5ZGNlODE3MzgwZqbY/Q==: --dhchap-ctrl-secret DHHC-1:01:YmNkNzQyYzY3NzcyODhjMGM5MzQwOTMyODU3YzA0Njio43xA: 00:17:53.803 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.803 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:53.803 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.803 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.803 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.803 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.803 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:53.803 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:54.062 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:17:54.062 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.062 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:54.062 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:54.062 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:54.062 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.062 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:54.062 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.062 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.062 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.062 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:54.062 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:54.062 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:54.321 00:17:54.321 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:54.321 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:54.321 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.579 10:45:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.579 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.579 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.579 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.579 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.579 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:54.579 { 00:17:54.579 "cntlid": 127, 00:17:54.579 "qid": 0, 00:17:54.579 "state": "enabled", 00:17:54.579 "thread": "nvmf_tgt_poll_group_000", 00:17:54.579 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:54.579 "listen_address": { 00:17:54.579 "trtype": "TCP", 00:17:54.579 "adrfam": "IPv4", 00:17:54.579 "traddr": "10.0.0.2", 00:17:54.579 "trsvcid": "4420" 00:17:54.579 }, 00:17:54.579 "peer_address": { 00:17:54.579 "trtype": "TCP", 00:17:54.579 "adrfam": "IPv4", 00:17:54.579 "traddr": "10.0.0.1", 00:17:54.579 "trsvcid": "35726" 00:17:54.579 }, 00:17:54.579 "auth": { 00:17:54.579 "state": "completed", 00:17:54.579 "digest": "sha512", 00:17:54.579 "dhgroup": "ffdhe4096" 00:17:54.579 } 00:17:54.579 } 00:17:54.579 ]' 00:17:54.579 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:54.579 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:54.579 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:54.579 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:54.579 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:54.838 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.838 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.838 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.838 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmI5OGFiMmFhY2NhODkzOWUyM2YyMmJjNWU1MDFhOTQxOTlkMDMwYWU4ZWFlZTdmYTI2MzU0ZTY2MDUwYzcyYYQ4hdE=: 00:17:54.838 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZmI5OGFiMmFhY2NhODkzOWUyM2YyMmJjNWU1MDFhOTQxOTlkMDMwYWU4ZWFlZTdmYTI2MzU0ZTY2MDUwYzcyYYQ4hdE=: 00:17:55.405 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.663 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:55.663 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.663 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.663 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.663 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:55.663 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.663 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:55.663 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:55.663 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:17:55.663 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.663 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:55.663 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:55.663 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:55.663 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.663 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.663 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.663 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.663 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.663 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.663 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.663 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.229 00:17:56.229 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.229 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.229 
10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.229 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.229 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.229 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.229 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.487 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.487 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:56.487 { 00:17:56.487 "cntlid": 129, 00:17:56.487 "qid": 0, 00:17:56.487 "state": "enabled", 00:17:56.487 "thread": "nvmf_tgt_poll_group_000", 00:17:56.487 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:56.487 "listen_address": { 00:17:56.487 "trtype": "TCP", 00:17:56.487 "adrfam": "IPv4", 00:17:56.487 "traddr": "10.0.0.2", 00:17:56.487 "trsvcid": "4420" 00:17:56.487 }, 00:17:56.487 "peer_address": { 00:17:56.487 "trtype": "TCP", 00:17:56.487 "adrfam": "IPv4", 00:17:56.487 "traddr": "10.0.0.1", 00:17:56.487 "trsvcid": "35738" 00:17:56.487 }, 00:17:56.487 "auth": { 00:17:56.487 "state": "completed", 00:17:56.487 "digest": "sha512", 00:17:56.487 "dhgroup": "ffdhe6144" 00:17:56.487 } 00:17:56.487 } 00:17:56.487 ]' 00:17:56.487 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:56.487 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:56.487 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:56.487 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:56.487 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:56.487 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.487 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.487 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.746 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDg5ZmY1MzBiYTM4MWJiYjRiNTFkYWU4YTFmY2RiYTk1MWZkN2QyNzBjOWRhNmI1Oox3TQ==: --dhchap-ctrl-secret DHHC-1:03:YjlmODhiNGExNWVjNDJhNmZlNmZlMjgwN2RiNzE5ZDMwZTc3YWU3Njk4NzEwNGFjMzIwYjk1ODljYWNkZDcyNxiKMjg=: 00:17:56.746 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDg5ZmY1MzBiYTM4MWJiYjRiNTFkYWU4YTFmY2RiYTk1MWZkN2QyNzBjOWRhNmI1Oox3TQ==: --dhchap-ctrl-secret 
DHHC-1:03:YjlmODhiNGExNWVjNDJhNmZlNmZlMjgwN2RiNzE5ZDMwZTc3YWU3Njk4NzEwNGFjMzIwYjk1ODljYWNkZDcyNxiKMjg=: 00:17:57.314 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.314 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:57.314 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.314 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.314 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.314 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:57.314 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:57.314 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:57.572 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:17:57.572 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:57.572 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:57.572 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:57.572 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:57.572 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.572 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.572 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.573 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.573 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.573 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.573 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.573 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.831 00:17:57.831 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:57.831 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:57.831 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.089 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.089 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.089 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.089 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.089 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.089 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.089 { 00:17:58.089 "cntlid": 131, 00:17:58.089 "qid": 0, 00:17:58.089 "state": "enabled", 00:17:58.089 "thread": "nvmf_tgt_poll_group_000", 00:17:58.089 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:58.089 "listen_address": { 00:17:58.089 "trtype": "TCP", 00:17:58.089 "adrfam": "IPv4", 00:17:58.089 "traddr": "10.0.0.2", 00:17:58.089 "trsvcid": "4420" 00:17:58.089 }, 00:17:58.089 "peer_address": { 00:17:58.089 "trtype": "TCP", 00:17:58.089 "adrfam": "IPv4", 00:17:58.089 "traddr": "10.0.0.1", 00:17:58.089 "trsvcid": "40712" 00:17:58.089 }, 00:17:58.089 "auth": { 00:17:58.089 "state": "completed", 00:17:58.089 "digest": "sha512", 00:17:58.089 "dhgroup": "ffdhe6144" 00:17:58.089 } 00:17:58.089 } 00:17:58.089 ]' 00:17:58.089 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.089 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.089 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.089 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:58.089 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:58.089 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.089 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.089 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.347 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUzMWRlYmM1YzJlY2VjM2ZhMDI0YjllMDRkYzdlOGUPEylu: --dhchap-ctrl-secret DHHC-1:02:YTkyZGI2MTk1NzRjZDgyNDBiOTAyYWVhNjQ0ZDFjZmJmODRmOWQyNWNmMWRiOGM3TkyGxQ==: 00:17:58.347 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjUzMWRlYmM1YzJlY2VjM2ZhMDI0YjllMDRkYzdlOGUPEylu: --dhchap-ctrl-secret DHHC-1:02:YTkyZGI2MTk1NzRjZDgyNDBiOTAyYWVhNjQ0ZDFjZmJmODRmOWQyNWNmMWRiOGM3TkyGxQ==: 00:17:58.911 10:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.911 10:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:58.911 10:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.911 10:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.911 10:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.911 10:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.911 10:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:58.911 10:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:59.169 10:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:17:59.169 10:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:59.169 10:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:59.169 10:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:59.169 10:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:59.169 10:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.169 10:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.169 10:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.169 10:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.169 10:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.169 10:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.169 10:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.169 10:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.427 00:17:59.427 10:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:59.685 10:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:59.685 10:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.685 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.685 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.685 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.685 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.685 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.685 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.685 { 00:17:59.685 "cntlid": 133, 00:17:59.685 "qid": 0, 00:17:59.685 "state": "enabled", 00:17:59.685 "thread": "nvmf_tgt_poll_group_000", 00:17:59.685 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:59.685 "listen_address": { 00:17:59.685 "trtype": "TCP", 00:17:59.685 "adrfam": "IPv4", 00:17:59.685 "traddr": "10.0.0.2", 00:17:59.685 "trsvcid": "4420" 00:17:59.685 }, 00:17:59.685 "peer_address": { 00:17:59.685 "trtype": "TCP", 00:17:59.685 "adrfam": "IPv4", 00:17:59.685 "traddr": "10.0.0.1", 00:17:59.685 "trsvcid": "40742" 00:17:59.685 }, 00:17:59.685 "auth": { 00:17:59.685 "state": "completed", 00:17:59.685 "digest": "sha512", 00:17:59.685 "dhgroup": "ffdhe6144" 00:17:59.685 } 00:17:59.685 } 00:17:59.685 ]' 00:17:59.685 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.685 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:59.685 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.943 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:59.943 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.943 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.943 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.943 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.202 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWNjZTM0YzgwZmI0NzBjNTc2YTAwMWJjNjFlNGE1YzcyMDA5MWY5ZGNlODE3MzgwZqbY/Q==: --dhchap-ctrl-secret 
DHHC-1:01:YmNkNzQyYzY3NzcyODhjMGM5MzQwOTMyODU3YzA0Njio43xA: 00:18:00.202 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWNjZTM0YzgwZmI0NzBjNTc2YTAwMWJjNjFlNGE1YzcyMDA5MWY5ZGNlODE3MzgwZqbY/Q==: --dhchap-ctrl-secret DHHC-1:01:YmNkNzQyYzY3NzcyODhjMGM5MzQwOTMyODU3YzA0Njio43xA: 00:18:00.769 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.769 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:00.769 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.769 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.769 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.769 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:00.769 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:00.769 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:00.769 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:00.769 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.769 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:00.769 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:00.769 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:00.769 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.769 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:00.769 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.769 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.769 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.769 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:00.769 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:18:00.769 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:01.336 00:18:01.336 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.336 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.336 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.336 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.336 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.336 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.336 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.336 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.336 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.336 { 00:18:01.336 "cntlid": 135, 00:18:01.336 "qid": 0, 00:18:01.336 "state": "enabled", 00:18:01.336 "thread": "nvmf_tgt_poll_group_000", 00:18:01.336 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:01.336 "listen_address": { 00:18:01.336 "trtype": "TCP", 00:18:01.336 "adrfam": "IPv4", 00:18:01.336 "traddr": "10.0.0.2", 00:18:01.336 "trsvcid": "4420" 00:18:01.336 }, 00:18:01.336 "peer_address": { 00:18:01.336 "trtype": "TCP", 00:18:01.336 "adrfam": "IPv4", 00:18:01.336 "traddr": "10.0.0.1", 00:18:01.336 "trsvcid": "40780" 00:18:01.336 }, 00:18:01.336 "auth": { 00:18:01.336 "state": "completed", 00:18:01.336 "digest": "sha512", 00:18:01.336 "dhgroup": "ffdhe6144" 00:18:01.336 } 00:18:01.336 } 00:18:01.336 ]' 00:18:01.336 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.594 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:01.594 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:01.594 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:01.594 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:01.594 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.594 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.594 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.853 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZmI5OGFiMmFhY2NhODkzOWUyM2YyMmJjNWU1MDFhOTQxOTlkMDMwYWU4ZWFlZTdmYTI2MzU0ZTY2MDUwYzcyYYQ4hdE=: 00:18:01.853 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZmI5OGFiMmFhY2NhODkzOWUyM2YyMmJjNWU1MDFhOTQxOTlkMDMwYWU4ZWFlZTdmYTI2MzU0ZTY2MDUwYzcyYYQ4hdE=: 00:18:02.418 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.418 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:02.418 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.418 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.418 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.418 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:02.418 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.418 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:02.418 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:02.676 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:02.676 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.676 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:02.676 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:02.676 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:02.676 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.676 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.676 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.676 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.676 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.676 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.676 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.676 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.937 00:18:03.236 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:03.237 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:03.237 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.237 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.237 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.237 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.237 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.237 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.237 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.237 { 00:18:03.237 "cntlid": 137, 00:18:03.237 "qid": 0, 00:18:03.237 "state": "enabled", 00:18:03.237 "thread": "nvmf_tgt_poll_group_000", 00:18:03.237 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:03.237 "listen_address": { 00:18:03.237 "trtype": "TCP", 00:18:03.237 "adrfam": "IPv4", 00:18:03.237 "traddr": "10.0.0.2", 00:18:03.237 "trsvcid": "4420" 00:18:03.237 }, 00:18:03.237 "peer_address": { 00:18:03.237 "trtype": "TCP", 00:18:03.237 "adrfam": "IPv4", 00:18:03.237 "traddr": "10.0.0.1", 00:18:03.237 "trsvcid": "40812" 00:18:03.237 }, 00:18:03.237 "auth": { 00:18:03.237 "state": "completed", 00:18:03.237 "digest": "sha512", 00:18:03.237 "dhgroup": "ffdhe8192" 00:18:03.237 } 00:18:03.237 } 00:18:03.237 ]' 00:18:03.237 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.237 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:03.237 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.544 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:03.544 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:03.544 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.544 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.544 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.544 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDg5ZmY1MzBiYTM4MWJiYjRiNTFkYWU4YTFmY2RiYTk1MWZkN2QyNzBjOWRhNmI1Oox3TQ==: --dhchap-ctrl-secret DHHC-1:03:YjlmODhiNGExNWVjNDJhNmZlNmZlMjgwN2RiNzE5ZDMwZTc3YWU3Njk4NzEwNGFjMzIwYjk1ODljYWNkZDcyNxiKMjg=: 00:18:03.544 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDg5ZmY1MzBiYTM4MWJiYjRiNTFkYWU4YTFmY2RiYTk1MWZkN2QyNzBjOWRhNmI1Oox3TQ==: --dhchap-ctrl-secret DHHC-1:03:YjlmODhiNGExNWVjNDJhNmZlNmZlMjgwN2RiNzE5ZDMwZTc3YWU3Njk4NzEwNGFjMzIwYjk1ODljYWNkZDcyNxiKMjg=: 00:18:04.131 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.131 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:04.131 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.131 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.131 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.131 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:04.131 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:04.131 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:04.389 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:04.389 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.389 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:04.389 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:04.389 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:04.389 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.389 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.389 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.389 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.389 10:45:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.389 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.389 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.389 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.956 00:18:04.956 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.956 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.956 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.215 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.215 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.215 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.215 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.215 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.215 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:05.215 { 00:18:05.215 "cntlid": 139, 00:18:05.215 "qid": 0, 00:18:05.215 "state": "enabled", 00:18:05.215 "thread": "nvmf_tgt_poll_group_000", 00:18:05.215 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:05.215 "listen_address": { 00:18:05.215 "trtype": "TCP", 00:18:05.215 "adrfam": "IPv4", 00:18:05.215 "traddr": "10.0.0.2", 00:18:05.215 "trsvcid": "4420" 00:18:05.215 }, 00:18:05.215 "peer_address": { 00:18:05.215 "trtype": "TCP", 00:18:05.215 "adrfam": "IPv4", 00:18:05.215 "traddr": "10.0.0.1", 00:18:05.215 "trsvcid": "40848" 00:18:05.215 }, 00:18:05.215 "auth": { 00:18:05.215 "state": "completed", 00:18:05.215 "digest": "sha512", 00:18:05.215 "dhgroup": "ffdhe8192" 00:18:05.215 } 00:18:05.215 } 00:18:05.215 ]' 00:18:05.215 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:05.215 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:05.215 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:05.215 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:05.215 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:05.215 10:45:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.215 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.215 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.474 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUzMWRlYmM1YzJlY2VjM2ZhMDI0YjllMDRkYzdlOGUPEylu: --dhchap-ctrl-secret DHHC-1:02:YTkyZGI2MTk1NzRjZDgyNDBiOTAyYWVhNjQ0ZDFjZmJmODRmOWQyNWNmMWRiOGM3TkyGxQ==: 00:18:05.474 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjUzMWRlYmM1YzJlY2VjM2ZhMDI0YjllMDRkYzdlOGUPEylu: --dhchap-ctrl-secret DHHC-1:02:YTkyZGI2MTk1NzRjZDgyNDBiOTAyYWVhNjQ0ZDFjZmJmODRmOWQyNWNmMWRiOGM3TkyGxQ==: 00:18:06.042 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.042 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:06.042 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.042 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.042 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.042 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:06.042 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:06.042 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:06.301 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:06.301 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:06.301 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:06.301 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:06.301 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:06.301 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.301 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.301 10:45:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.301 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.301 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.301 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.301 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.301 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.869 00:18:06.869 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:06.869 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.869 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.869 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.869 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.869 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.869 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.128 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.128 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:07.128 { 00:18:07.128 "cntlid": 141, 00:18:07.128 "qid": 0, 00:18:07.128 "state": "enabled", 00:18:07.128 "thread": "nvmf_tgt_poll_group_000", 00:18:07.128 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:07.128 "listen_address": { 00:18:07.128 "trtype": "TCP", 00:18:07.128 "adrfam": "IPv4", 00:18:07.128 "traddr": "10.0.0.2", 00:18:07.128 "trsvcid": "4420" 00:18:07.128 }, 00:18:07.128 "peer_address": { 00:18:07.128 "trtype": "TCP", 00:18:07.128 "adrfam": "IPv4", 00:18:07.128 "traddr": "10.0.0.1", 00:18:07.128 "trsvcid": "50970" 00:18:07.128 }, 00:18:07.128 "auth": { 00:18:07.128 "state": "completed", 00:18:07.128 "digest": "sha512", 00:18:07.128 "dhgroup": "ffdhe8192" 00:18:07.128 } 00:18:07.128 } 00:18:07.128 ]' 00:18:07.128 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:07.128 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:07.128 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.128 10:45:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:07.128 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:07.128 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.128 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.128 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.388 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWNjZTM0YzgwZmI0NzBjNTc2YTAwMWJjNjFlNGE1YzcyMDA5MWY5ZGNlODE3MzgwZqbY/Q==: --dhchap-ctrl-secret DHHC-1:01:YmNkNzQyYzY3NzcyODhjMGM5MzQwOTMyODU3YzA0Njio43xA: 00:18:07.388 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWNjZTM0YzgwZmI0NzBjNTc2YTAwMWJjNjFlNGE1YzcyMDA5MWY5ZGNlODE3MzgwZqbY/Q==: --dhchap-ctrl-secret DHHC-1:01:YmNkNzQyYzY3NzcyODhjMGM5MzQwOTMyODU3YzA0Njio43xA: 00:18:07.954 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.954 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:07.954 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.954 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.954 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.954 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.954 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:07.954 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:08.212 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:08.212 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:08.213 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:08.213 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:08.213 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:08.213 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.213 10:45:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:08.213 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.213 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.213 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.213 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:08.213 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:08.213 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:08.780 00:18:08.780 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:08.780 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:08.780 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.780 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.780 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.780 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.780 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.780 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.780 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.780 { 00:18:08.780 "cntlid": 143, 00:18:08.780 "qid": 0, 00:18:08.780 "state": "enabled", 00:18:08.780 "thread": "nvmf_tgt_poll_group_000", 00:18:08.780 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:08.780 "listen_address": { 00:18:08.780 "trtype": "TCP", 00:18:08.780 "adrfam": "IPv4", 00:18:08.780 "traddr": "10.0.0.2", 00:18:08.780 "trsvcid": "4420" 00:18:08.780 }, 00:18:08.780 "peer_address": { 00:18:08.780 "trtype": "TCP", 00:18:08.780 "adrfam": "IPv4", 00:18:08.780 "traddr": "10.0.0.1", 00:18:08.780 "trsvcid": "51002" 00:18:08.780 }, 00:18:08.780 "auth": { 00:18:08.780 "state": "completed", 00:18:08.780 "digest": "sha512", 00:18:08.780 "dhgroup": "ffdhe8192" 00:18:08.780 } 00:18:08.780 } 00:18:08.780 ]' 00:18:08.780 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.780 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:08.780 
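Note the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion visible in the trace: this is how the script flips between bidirectional and unidirectional authentication. No controller key is registered for key3, so the array expands to nothing and both nvmf_subsystem_add_host and the attach run without --dhchap-ctrlr-key. The idiom in isolation, with dummy values:

# ${var:+word} yields word only when var is set and non-empty; expanding
# it unquoted into an array therefore contributes either two arguments
# (--dhchap-ctrlr-key ckeyN) or none at all.
ckeys=("c0" "c1" "c2" "")   # slot 3 intentionally empty, as in the log

for keyid in 0 3; do
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "keyid=$keyid -> --dhchap-key key$keyid ${ckey[*]}"
done
# keyid=0 -> --dhchap-key key0 --dhchap-ctrlr-key ckey0
# keyid=3 -> --dhchap-key key3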
10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.039 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:09.039 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.039 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.039 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.039 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.039 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmI5OGFiMmFhY2NhODkzOWUyM2YyMmJjNWU1MDFhOTQxOTlkMDMwYWU4ZWFlZTdmYTI2MzU0ZTY2MDUwYzcyYYQ4hdE=: 00:18:09.039 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZmI5OGFiMmFhY2NhODkzOWUyM2YyMmJjNWU1MDFhOTQxOTlkMDMwYWU4ZWFlZTdmYTI2MzU0ZTY2MDUwYzcyYYQ4hdE=: 00:18:09.606 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.865 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:09.865 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.865 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.865 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.865 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:09.865 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:09.865 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:09.865 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:09.865 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:09.865 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:09.865 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:09.865 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.865 10:45:17 
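The IFS=, and printf %s pairs in the trace are the script's way of joining its digest and DH-group arrays into the comma-separated lists that bdev_nvme_set_options expects, here re-enabling the full algorithm set after the single-algorithm rounds. The same idiom standalone; hostrpc in the log is simply rpc.py pointed at the host socket /var/tmp/host.sock:

digests=(sha256 sha384 sha512)
dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)

# "$*" joins the positional parameters with the first character of IFS,
# so a function-local IFS=, produces the comma lists seen in the log.
join_by_comma() { local IFS=,; printf %s "$*"; }

./spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests "$(join_by_comma "${digests[@]}")" \
    --dhchap-dhgroups "$(join_by_comma "${dhgroups[@]}")"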
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:09.865 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:09.865 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:09.865 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.865 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.865 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.865 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.865 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.865 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.865 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.865 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.433 00:18:10.433 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.433 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:10.433 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.691 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.691 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.691 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.691 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.691 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.691 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:10.691 { 00:18:10.691 "cntlid": 145, 00:18:10.691 "qid": 0, 00:18:10.691 "state": "enabled", 00:18:10.691 "thread": "nvmf_tgt_poll_group_000", 00:18:10.691 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:10.691 "listen_address": { 00:18:10.691 "trtype": "TCP", 00:18:10.691 "adrfam": "IPv4", 00:18:10.691 "traddr": "10.0.0.2", 00:18:10.691 "trsvcid": "4420" 00:18:10.691 }, 00:18:10.691 "peer_address": { 00:18:10.691 
"trtype": "TCP", 00:18:10.691 "adrfam": "IPv4", 00:18:10.691 "traddr": "10.0.0.1", 00:18:10.691 "trsvcid": "51038" 00:18:10.691 }, 00:18:10.691 "auth": { 00:18:10.691 "state": "completed", 00:18:10.691 "digest": "sha512", 00:18:10.691 "dhgroup": "ffdhe8192" 00:18:10.691 } 00:18:10.691 } 00:18:10.691 ]' 00:18:10.691 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:10.691 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:10.691 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:10.691 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:10.691 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:10.691 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.691 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.691 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.950 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDg5ZmY1MzBiYTM4MWJiYjRiNTFkYWU4YTFmY2RiYTk1MWZkN2QyNzBjOWRhNmI1Oox3TQ==: --dhchap-ctrl-secret DHHC-1:03:YjlmODhiNGExNWVjNDJhNmZlNmZlMjgwN2RiNzE5ZDMwZTc3YWU3Njk4NzEwNGFjMzIwYjk1ODljYWNkZDcyNxiKMjg=: 00:18:10.950 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDg5ZmY1MzBiYTM4MWJiYjRiNTFkYWU4YTFmY2RiYTk1MWZkN2QyNzBjOWRhNmI1Oox3TQ==: --dhchap-ctrl-secret DHHC-1:03:YjlmODhiNGExNWVjNDJhNmZlNmZlMjgwN2RiNzE5ZDMwZTc3YWU3Njk4NzEwNGFjMzIwYjk1ODljYWNkZDcyNxiKMjg=: 00:18:11.518 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.518 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.518 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:11.518 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.518 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.518 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.518 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:18:11.518 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.518 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.518 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.518 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:11.518 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:11.518 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:11.518 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:11.518 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:11.518 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:11.518 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:11.518 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:11.518 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:11.518 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:12.085 request: 00:18:12.085 { 00:18:12.085 "name": "nvme0", 00:18:12.085 "trtype": "tcp", 00:18:12.085 "traddr": "10.0.0.2", 00:18:12.085 "adrfam": "ipv4", 00:18:12.085 "trsvcid": "4420", 00:18:12.085 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:12.085 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:12.085 "prchk_reftag": false, 00:18:12.085 "prchk_guard": false, 00:18:12.085 "hdgst": false, 00:18:12.085 "ddgst": false, 00:18:12.085 "dhchap_key": "key2", 00:18:12.085 "allow_unrecognized_csi": false, 00:18:12.085 "method": "bdev_nvme_attach_controller", 00:18:12.085 "req_id": 1 00:18:12.085 } 00:18:12.085 Got JSON-RPC error response 00:18:12.085 response: 00:18:12.085 { 00:18:12.085 "code": -5, 00:18:12.085 "message": "Input/output error" 00:18:12.085 } 00:18:12.085 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:12.085 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:12.085 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:12.085 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:12.085 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:12.085 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.085 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.085 10:45:19 
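The NOT/valid_exec_arg scaffolding above implements an expected-failure assertion: the host was registered with key1 only (auth.sh:144), so an attach presenting key2 must be rejected, and the RPC surfaces that as JSON-RPC code -5, "Input/output error". Stripped of the framework, the assertion is essentially this, under the same socket and path assumptions as the earlier sketches:

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562

# Attaching with a key the target never registered for this host has to
# fail; a zero exit status here would mean authentication was bypassed.
if ./spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
       -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" \
       -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2; then
    echo "FAIL: attach succeeded with an unregistered key" >&2
    exit 1
fi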
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.085 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.085 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.086 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.086 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.086 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:12.086 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:12.086 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:12.086 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:12.086 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.086 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:12.086 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.086 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:12.086 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:12.086 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:12.654 request: 00:18:12.654 { 00:18:12.654 "name": "nvme0", 00:18:12.654 "trtype": "tcp", 00:18:12.654 "traddr": "10.0.0.2", 00:18:12.654 "adrfam": "ipv4", 00:18:12.654 "trsvcid": "4420", 00:18:12.654 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:12.654 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:12.654 "prchk_reftag": false, 00:18:12.654 "prchk_guard": false, 00:18:12.654 "hdgst": false, 00:18:12.654 "ddgst": false, 00:18:12.654 "dhchap_key": "key1", 00:18:12.654 "dhchap_ctrlr_key": "ckey2", 00:18:12.654 "allow_unrecognized_csi": false, 00:18:12.654 "method": "bdev_nvme_attach_controller", 00:18:12.654 "req_id": 1 00:18:12.654 } 00:18:12.654 Got JSON-RPC error response 00:18:12.654 response: 00:18:12.654 { 00:18:12.654 "code": -5, 00:18:12.654 "message": "Input/output error" 00:18:12.654 } 00:18:12.654 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:12.654 10:45:19 
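The case that just failed (auth.sh:149-150) is subtly different from the previous one: the host key matches, but the controller key offered for the reverse direction does not, so bidirectional authentication still collapses into the same -5 error. In isolation:

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
SUBNQN=nqn.2024-03.io.spdk:cnode0

# Register the host for mutual auth with the matching key1/ckey1 pair...
./spdk/scripts/rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# ...then present ckey2 for the controller-to-host leg: the handshake is
# expected to fail even though the host key itself is correct.
! ./spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
    -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 || exit 1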
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:12.654 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:12.654 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:12.654 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:12.654 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.654 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.654 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.654 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:18:12.654 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.654 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.654 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.654 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.654 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:12.654 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.654 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:12.654 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.654 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:12.654 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.654 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.654 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.654 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.914 request: 00:18:12.914 { 00:18:12.914 "name": "nvme0", 00:18:12.914 "trtype": "tcp", 00:18:12.914 "traddr": "10.0.0.2", 00:18:12.914 "adrfam": "ipv4", 00:18:12.914 "trsvcid": "4420", 00:18:12.914 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:12.914 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:12.914 "prchk_reftag": false, 00:18:12.914 "prchk_guard": false, 00:18:12.914 "hdgst": false, 00:18:12.914 "ddgst": false, 00:18:12.914 "dhchap_key": "key1", 00:18:12.914 "dhchap_ctrlr_key": "ckey1", 00:18:12.914 "allow_unrecognized_csi": false, 00:18:12.914 "method": "bdev_nvme_attach_controller", 00:18:12.914 "req_id": 1 00:18:12.914 } 00:18:12.914 Got JSON-RPC error response 00:18:12.914 response: 00:18:12.914 { 00:18:12.914 "code": -5, 00:18:12.914 "message": "Input/output error" 00:18:12.914 } 00:18:12.914 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:12.914 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:12.914 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:12.914 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:12.914 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:12.914 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.914 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.914 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.914 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1671396 00:18:12.914 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1671396 ']' 00:18:12.914 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1671396 00:18:12.914 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:12.914 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:12.914 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1671396 00:18:13.173 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:13.173 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:13.173 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1671396' 00:18:13.173 killing process with pid 1671396 00:18:13.173 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1671396 00:18:13.173 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1671396 00:18:13.173 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:13.173 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:13.173 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:13.173 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:13.173 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1693635 00:18:13.173 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:13.173 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1693635 00:18:13.173 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1693635 ']' 00:18:13.173 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.173 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:13.174 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.174 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:13.174 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.432 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:13.432 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:13.432 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:13.432 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:13.432 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.432 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:13.432 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:13.432 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1693635 00:18:13.432 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1693635 ']' 00:18:13.432 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.432 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:13.432 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
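The restart above (auth.sh:159-160) kills the first target and relaunches it with --wait-for-rpc, which parks the application before framework initialization so the test can reconfigure it over RPC first. A plausible reconstruction of that sequence, using the binary and netns names from the log; framework_start_init is the stock SPDK RPC for resuming an app started this way:

# Relaunch the target paused, inside the test's network namespace.
ip netns exec cvl_0_0_ns_spdk ./spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!

# Poll until the RPC socket answers (what the log's waitforlisten does).
until ./spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || exit 1   # give up if the target died on startup
    sleep 0.5
done

# Apply any pre-init configuration here, then let startup finish.
./spdk/scripts/rpc.py framework_start_init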
00:18:13.432 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:13.432 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.691 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:13.691 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:13.691 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:13.691 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.691 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.691 null0 00:18:13.691 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.691 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:13.691 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.359 00:18:13.691 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.691 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.959 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.959 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.9KO ]] 00:18:13.959 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.9KO 00:18:13.959 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.959 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.959 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.959 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:13.959 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.HHa 00:18:13.959 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.959 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.959 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.959 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.tSb ]] 00:18:13.959 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.tSb 00:18:13.959 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.959 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.959 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.959 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:13.959 10:45:21 
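The loop running above loads the generated secrets into named keyring entries. The naming is positional: keyN is the host secret for round N and ckeyN its controller counterpart, and ckey3 is deliberately absent so the key3 round exercises unidirectional auth. The same loop in compact form, with the file names taken from the log:

keys=(/tmp/spdk.key-null.359 /tmp/spdk.key-sha256.HHa
      /tmp/spdk.key-sha384.O7g /tmp/spdk.key-sha512.6KP)
ckeys=(/tmp/spdk.key-sha512.9KO /tmp/spdk.key-sha384.tSb
       /tmp/spdk.key-sha256.qj1 "")

for i in "${!keys[@]}"; do
    ./spdk/scripts/rpc.py keyring_file_add_key "key$i" "${keys[$i]}"
    # Only register a controller key where one exists for the slot.
    if [[ -n ${ckeys[$i]} ]]; then
        ./spdk/scripts/rpc.py keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    fi
done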
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.O7g 00:18:13.959 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.960 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.960 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.960 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.qj1 ]] 00:18:13.960 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.qj1 00:18:13.960 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.960 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.960 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.960 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:13.960 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.6KP 00:18:13.960 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.960 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.960 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.960 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:13.960 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:13.960 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:13.960 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:13.960 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:13.960 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:13.960 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.960 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:13.960 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.960 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.960 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.960 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:13.960 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:18:13.960 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:14.526 nvme0n1 00:18:14.785 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.785 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:14.785 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.785 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.785 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.785 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.785 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.785 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.785 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.785 { 00:18:14.785 "cntlid": 1, 00:18:14.785 "qid": 0, 00:18:14.785 "state": "enabled", 00:18:14.785 "thread": "nvmf_tgt_poll_group_000", 00:18:14.785 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:14.785 "listen_address": { 00:18:14.785 "trtype": "TCP", 00:18:14.785 "adrfam": "IPv4", 00:18:14.785 "traddr": "10.0.0.2", 00:18:14.785 "trsvcid": "4420" 00:18:14.785 }, 00:18:14.785 "peer_address": { 00:18:14.785 "trtype": "TCP", 00:18:14.785 "adrfam": "IPv4", 00:18:14.785 "traddr": "10.0.0.1", 00:18:14.785 "trsvcid": "51094" 00:18:14.785 }, 00:18:14.785 "auth": { 00:18:14.785 "state": "completed", 00:18:14.785 "digest": "sha512", 00:18:14.785 "dhgroup": "ffdhe8192" 00:18:14.785 } 00:18:14.785 } 00:18:14.785 ]' 00:18:14.785 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.785 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:15.044 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:15.044 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:15.044 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:15.045 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.045 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.045 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.304 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZmI5OGFiMmFhY2NhODkzOWUyM2YyMmJjNWU1MDFhOTQxOTlkMDMwYWU4ZWFlZTdmYTI2MzU0ZTY2MDUwYzcyYYQ4hdE=: 00:18:15.304 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZmI5OGFiMmFhY2NhODkzOWUyM2YyMmJjNWU1MDFhOTQxOTlkMDMwYWU4ZWFlZTdmYTI2MzU0ZTY2MDUwYzcyYYQ4hdE=: 00:18:15.871 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.871 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:15.871 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.871 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.871 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.871 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:15.871 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.871 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.871 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.871 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:15.871 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:16.130 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:16.130 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:16.131 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:16.131 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:16.131 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:16.131 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:16.131 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:16.131 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:16.131 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:16.131 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:16.131 request: 00:18:16.131 { 00:18:16.131 "name": "nvme0", 00:18:16.131 "trtype": "tcp", 00:18:16.131 "traddr": "10.0.0.2", 00:18:16.131 "adrfam": "ipv4", 00:18:16.131 "trsvcid": "4420", 00:18:16.131 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:16.131 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:16.131 "prchk_reftag": false, 00:18:16.131 "prchk_guard": false, 00:18:16.131 "hdgst": false, 00:18:16.131 "ddgst": false, 00:18:16.131 "dhchap_key": "key3", 00:18:16.131 "allow_unrecognized_csi": false, 00:18:16.131 "method": "bdev_nvme_attach_controller", 00:18:16.131 "req_id": 1 00:18:16.131 } 00:18:16.131 Got JSON-RPC error response 00:18:16.131 response: 00:18:16.131 { 00:18:16.131 "code": -5, 00:18:16.131 "message": "Input/output error" 00:18:16.131 } 00:18:16.131 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:16.131 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:16.131 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:16.131 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:16.131 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:16.131 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:16.131 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:16.131 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:16.388 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:16.388 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:16.388 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:16.388 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:16.388 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:16.388 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:16.388 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:16.388 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:16.388 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:16.388 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:16.646 request: 00:18:16.646 { 00:18:16.646 "name": "nvme0", 00:18:16.646 "trtype": "tcp", 00:18:16.646 "traddr": "10.0.0.2", 00:18:16.646 "adrfam": "ipv4", 00:18:16.646 "trsvcid": "4420", 00:18:16.646 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:16.646 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:16.646 "prchk_reftag": false, 00:18:16.646 "prchk_guard": false, 00:18:16.646 "hdgst": false, 00:18:16.646 "ddgst": false, 00:18:16.646 "dhchap_key": "key3", 00:18:16.646 "allow_unrecognized_csi": false, 00:18:16.646 "method": "bdev_nvme_attach_controller", 00:18:16.646 "req_id": 1 00:18:16.646 } 00:18:16.646 Got JSON-RPC error response 00:18:16.646 response: 00:18:16.646 { 00:18:16.646 "code": -5, 00:18:16.646 "message": "Input/output error" 00:18:16.646 } 00:18:16.646 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:16.646 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:16.646 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:16.646 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:16.646 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:16.646 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:16.646 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:16.646 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:16.646 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:16.646 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:16.905 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:16.905 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.905 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.905 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.905 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
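The two rejections above are negotiation-mismatch cases rather than wrong-key cases: key3 authenticated fine with the full algorithm set, but once the host's offer is limited to sha256 only (auth.sh:183) and then to ffdhe2048 only (auth.sh:187), the handshake can no longer agree on parameters compatible with that key and fails with the same -5. The shape of the digest case, including the restore step the log performs afterwards; key3 is the sha512-wrapped secret loaded earlier, so this pairing is expected to be incompatible:

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562

# Narrow the host's digest offer to sha256 and expect the attach to fail.
./spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256
! ./spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 || exit 1

# Restore the full offer before the next case.
./spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192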
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:16.905 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.905 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.905 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.905 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:16.905 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:16.905 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:16.905 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:16.905 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:16.905 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:16.905 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:16.905 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:16.905 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:16.905 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:17.164 request: 00:18:17.164 { 00:18:17.164 "name": "nvme0", 00:18:17.164 "trtype": "tcp", 00:18:17.164 "traddr": "10.0.0.2", 00:18:17.164 "adrfam": "ipv4", 00:18:17.164 "trsvcid": "4420", 00:18:17.164 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:17.164 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:17.164 "prchk_reftag": false, 00:18:17.164 "prchk_guard": false, 00:18:17.164 "hdgst": false, 00:18:17.164 "ddgst": false, 00:18:17.164 "dhchap_key": "key0", 00:18:17.164 "dhchap_ctrlr_key": "key1", 00:18:17.164 "allow_unrecognized_csi": false, 00:18:17.164 "method": "bdev_nvme_attach_controller", 00:18:17.164 "req_id": 1 00:18:17.164 } 00:18:17.164 Got JSON-RPC error response 00:18:17.164 response: 00:18:17.164 { 00:18:17.164 "code": -5, 00:18:17.164 "message": "Input/output error" 00:18:17.164 } 00:18:17.164 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:17.164 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:17.164 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:17.164 10:45:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:17.164 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:17.164 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:17.164 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:17.422 nvme0n1 00:18:17.422 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:17.422 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:18:17.422 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.681 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.681 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.681 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.940 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:18:17.940 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.940 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.940 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.940 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:17.940 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:17.940 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:18.876 nvme0n1 00:18:18.876 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:18.876 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:18.876 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.876 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.876 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:18.876 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.876 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.876 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.876 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:18.876 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:18.876 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.135 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.135 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWNjZTM0YzgwZmI0NzBjNTc2YTAwMWJjNjFlNGE1YzcyMDA5MWY5ZGNlODE3MzgwZqbY/Q==: --dhchap-ctrl-secret DHHC-1:03:ZmI5OGFiMmFhY2NhODkzOWUyM2YyMmJjNWU1MDFhOTQxOTlkMDMwYWU4ZWFlZTdmYTI2MzU0ZTY2MDUwYzcyYYQ4hdE=: 00:18:19.135 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWNjZTM0YzgwZmI0NzBjNTc2YTAwMWJjNjFlNGE1YzcyMDA5MWY5ZGNlODE3MzgwZqbY/Q==: --dhchap-ctrl-secret DHHC-1:03:ZmI5OGFiMmFhY2NhODkzOWUyM2YyMmJjNWU1MDFhOTQxOTlkMDMwYWU4ZWFlZTdmYTI2MzU0ZTY2MDUwYzcyYYQ4hdE=: 00:18:19.701 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:19.701 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:19.701 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:19.701 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:19.701 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:19.701 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:19.701 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:19.701 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.701 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.960 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:18:19.960 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:19.960 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:19.960 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:19.960 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:19.960 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:19.960 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:19.960 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:19.960 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:19.960 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:20.218 request: 00:18:20.218 { 00:18:20.218 "name": "nvme0", 00:18:20.218 "trtype": "tcp", 00:18:20.218 "traddr": "10.0.0.2", 00:18:20.218 "adrfam": "ipv4", 00:18:20.218 "trsvcid": "4420", 00:18:20.218 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:20.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:20.218 "prchk_reftag": false, 00:18:20.218 "prchk_guard": false, 00:18:20.218 "hdgst": false, 00:18:20.218 "ddgst": false, 00:18:20.218 "dhchap_key": "key1", 00:18:20.218 "allow_unrecognized_csi": false, 00:18:20.218 "method": "bdev_nvme_attach_controller", 00:18:20.218 "req_id": 1 00:18:20.218 } 00:18:20.218 Got JSON-RPC error response 00:18:20.218 response: 00:18:20.218 { 00:18:20.218 "code": -5, 00:18:20.218 "message": "Input/output error" 00:18:20.218 } 00:18:20.476 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:20.476 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:20.476 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:20.476 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:20.476 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:20.476 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:20.476 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:21.043 nvme0n1 00:18:21.043 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:18:21.043 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:21.043 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.301 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.301 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.301 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.560 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:21.560 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.560 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.560 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.560 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:21.560 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:21.560 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:21.818 nvme0n1 00:18:21.818 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:18:21.818 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:18:21.818 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.076 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.076 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.076 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.333 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:22.333 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.333 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.333 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.333 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YjUzMWRlYmM1YzJlY2VjM2ZhMDI0YjllMDRkYzdlOGUPEylu: '' 2s 00:18:22.333 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:22.333 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:22.333 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YjUzMWRlYmM1YzJlY2VjM2ZhMDI0YjllMDRkYzdlOGUPEylu: 00:18:22.333 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:18:22.333 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:22.333 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:22.333 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YjUzMWRlYmM1YzJlY2VjM2ZhMDI0YjllMDRkYzdlOGUPEylu: ]] 00:18:22.333 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YjUzMWRlYmM1YzJlY2VjM2ZhMDI0YjllMDRkYzdlOGUPEylu: 00:18:22.333 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:18:22.333 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:22.333 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:24.232 10:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:24.232 10:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:24.232 10:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:24.232 10:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:24.232 10:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:24.232 10:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:24.232 10:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:24.232 10:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:24.232 10:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.232 10:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.232 10:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.232 10:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:ZWNjZTM0YzgwZmI0NzBjNTc2YTAwMWJjNjFlNGE1YzcyMDA5MWY5ZGNlODE3MzgwZqbY/Q==: 2s 00:18:24.232 10:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:24.232 10:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:24.232 10:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:24.232 10:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZWNjZTM0YzgwZmI0NzBjNTc2YTAwMWJjNjFlNGE1YzcyMDA5MWY5ZGNlODE3MzgwZqbY/Q==: 00:18:24.232 10:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:24.232 10:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:24.232 10:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:24.232 10:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZWNjZTM0YzgwZmI0NzBjNTc2YTAwMWJjNjFlNGE1YzcyMDA5MWY5ZGNlODE3MzgwZqbY/Q==: ]] 00:18:24.232 10:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZWNjZTM0YzgwZmI0NzBjNTc2YTAwMWJjNjFlNGE1YzcyMDA5MWY5ZGNlODE3MzgwZqbY/Q==: 00:18:24.233 10:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:24.233 10:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:26.763 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:26.763 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:26.763 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:26.763 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:26.763 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:26.763 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:26.763 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:26.763 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.763 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:26.763 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.763 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.763 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.763 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:26.763 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:26.763 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:27.022 nvme0n1 00:18:27.022 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:27.022 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.022 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.022 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.022 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:27.022 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:27.588 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:27.588 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:27.588 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.846 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.846 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:27.846 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.846 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.846 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.846 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:27.846 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:28.105 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:28.105 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:28.105 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.105 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.105 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:28.105 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.105 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.105 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.105 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:28.105 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:28.105 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:28.105 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:18:28.105 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.105 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:28.105 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.105 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:28.105 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:28.672 request: 00:18:28.672 { 00:18:28.672 "name": "nvme0", 00:18:28.672 "dhchap_key": "key1", 00:18:28.672 "dhchap_ctrlr_key": "key3", 00:18:28.672 "method": "bdev_nvme_set_keys", 00:18:28.672 "req_id": 1 00:18:28.672 } 00:18:28.672 Got JSON-RPC error response 00:18:28.672 response: 00:18:28.672 { 00:18:28.672 "code": -13, 00:18:28.672 "message": "Permission denied" 00:18:28.672 } 00:18:28.672 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:28.672 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:28.672 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:28.672 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:28.672 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:28.672 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:28.672 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.930 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:18:28.930 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:29.864 10:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:29.864 10:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:29.864 10:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.123 10:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:30.123 10:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:30.123 10:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.123 10:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.123 10:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.123 10:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:30.123 10:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:30.123 10:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:31.057 nvme0n1 00:18:31.057 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:31.057 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.057 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.057 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.057 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:31.057 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:31.057 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:31.057 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
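
For readers following the trace: the rejected `bdev_nvme_set_keys` above (JSON-RPC error -13, Permission denied) leaves nvme0 unable to re-authenticate, and because it was attached with `--ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1` it is expected to drop off within about a second. The `(( 1 != 0 ))` / `sleep 1s` records at target/auth.sh@262-263 are that wait: the script re-reads `bdev_nvme_get_controllers | jq length` until the count reaches 0, then rotates the subsystem keys and attaches a fresh controller. A minimal sketch of the polling idiom, assuming the `hostrpc` rpc.py wrapper used throughout this log (auth.sh's actual loop may differ in detail):

    # Poll until the host reports no attached NVMe-oF controllers.
    while (( $(hostrpc bdev_nvme_get_controllers | jq length) != 0 )); do
        sleep 1s  # wait out the reconnect-delay / ctrlr-loss window
    done
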
00:18:31.057 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:31.057 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:31.057 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:31.057 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:31.057 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:31.316 request: 00:18:31.316 { 00:18:31.316 "name": "nvme0", 00:18:31.316 "dhchap_key": "key2", 00:18:31.316 "dhchap_ctrlr_key": "key0", 00:18:31.316 "method": "bdev_nvme_set_keys", 00:18:31.316 "req_id": 1 00:18:31.316 } 00:18:31.316 Got JSON-RPC error response 00:18:31.316 response: 00:18:31.316 { 00:18:31.316 "code": -13, 00:18:31.316 "message": "Permission denied" 00:18:31.316 } 00:18:31.316 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:31.316 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:31.316 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:31.316 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:31.316 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:31.316 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:31.316 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.575 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:31.575 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:32.509 10:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:32.509 10:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:32.509 10:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.768 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:32.768 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:32.769 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:32.769 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1671575 00:18:32.769 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1671575 ']' 00:18:32.769 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1671575 00:18:32.769 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:32.769 
10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:32.769 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1671575 00:18:32.769 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:32.769 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:32.769 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1671575' 00:18:32.769 killing process with pid 1671575 00:18:32.769 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1671575 00:18:32.769 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1671575 00:18:33.027 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:33.027 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:33.027 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:33.027 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:33.027 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:33.027 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:33.027 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:33.027 rmmod nvme_tcp 00:18:33.027 rmmod nvme_fabrics 00:18:33.027 rmmod nvme_keyring 00:18:33.027 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:33.286 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:33.286 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:33.286 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1693635 ']' 00:18:33.286 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1693635 00:18:33.286 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1693635 ']' 00:18:33.286 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1693635 00:18:33.286 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:33.286 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:33.286 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1693635 00:18:33.286 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:33.286 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:33.286 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1693635' 00:18:33.286 killing process with pid 1693635 00:18:33.286 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1693635 00:18:33.286 10:45:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1693635 00:18:33.286 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:33.286 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:33.286 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:33.286 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:33.286 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:18:33.286 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:33.286 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:33.286 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:33.286 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:33.286 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:33.286 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:33.286 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.823 10:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:35.823 10:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.359 /tmp/spdk.key-sha256.HHa /tmp/spdk.key-sha384.O7g /tmp/spdk.key-sha512.6KP /tmp/spdk.key-sha512.9KO /tmp/spdk.key-sha384.tSb /tmp/spdk.key-sha256.qj1 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:35.823 00:18:35.823 real 2m34.043s 00:18:35.823 user 5m55.199s 00:18:35.823 sys 0m24.590s 00:18:35.823 10:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:35.823 10:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.823 ************************************ 00:18:35.823 END TEST nvmf_auth_target 00:18:35.823 ************************************ 00:18:35.823 10:45:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:35.823 10:45:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:35.823 10:45:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:35.823 10:45:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:35.823 10:45:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:35.823 ************************************ 00:18:35.823 START TEST nvmf_bdevio_no_huge 00:18:35.823 ************************************ 00:18:35.823 10:45:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:35.823 * Looking for test storage... 
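
A note on the key material this run just cleaned up: the `rm -f /tmp/spdk.key-null.359 /tmp/spdk.key-sha256.HHa ...` in the cleanup above deletes the DH-HCHAP secrets that every `--dhchap-key`/`--dhchap-ctrlr-key` in the nvmf_auth_target test referred to. The secrets printed in this log use the `DHHC-1:<id>:<base64>:` representation shared by nvme-cli and the kernel, where `<id>` encodes the hash the secret is transformed with (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload is, to my reading of that format, the key bytes followed by a 4-byte CRC — the log itself does not state this. A quick sanity check against one of the secrets that appears earlier in the log:

    secret='DHHC-1:01:YjUzMWRlYmM1YzJlY2VjM2ZhMDI0YjllMDRkYzdlOGUPEylu:'
    IFS=: read -r fmt hmac b64 _ <<< "$secret"
    echo "format=$fmt hmac_id=$hmac"    # format=DHHC-1 hmac_id=01 (SHA-256)
    echo -n "$b64" | base64 -d | wc -c  # 36 bytes: 32 of key + 4 of CRC
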
00:18:35.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:35.823 10:45:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:35.823 10:45:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:18:35.823 10:45:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:35.823 10:45:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:35.823 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:35.823 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:35.823 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:35.823 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:35.823 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:35.823 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:35.823 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:35.823 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:35.823 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:35.823 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:35.823 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:35.823 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:35.823 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:35.823 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:35.823 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:35.823 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:35.823 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:35.823 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:35.823 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:35.823 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:35.823 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:35.823 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:35.823 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:35.823 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:35.823 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:35.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.824 --rc genhtml_branch_coverage=1 00:18:35.824 --rc genhtml_function_coverage=1 00:18:35.824 --rc genhtml_legend=1 00:18:35.824 --rc geninfo_all_blocks=1 00:18:35.824 --rc geninfo_unexecuted_blocks=1 00:18:35.824 00:18:35.824 ' 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:35.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.824 --rc genhtml_branch_coverage=1 00:18:35.824 --rc genhtml_function_coverage=1 00:18:35.824 --rc genhtml_legend=1 00:18:35.824 --rc geninfo_all_blocks=1 00:18:35.824 --rc geninfo_unexecuted_blocks=1 00:18:35.824 00:18:35.824 ' 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:35.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.824 --rc genhtml_branch_coverage=1 00:18:35.824 --rc genhtml_function_coverage=1 00:18:35.824 --rc genhtml_legend=1 00:18:35.824 --rc geninfo_all_blocks=1 00:18:35.824 --rc geninfo_unexecuted_blocks=1 00:18:35.824 00:18:35.824 ' 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:35.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.824 --rc genhtml_branch_coverage=1 00:18:35.824 --rc genhtml_function_coverage=1 00:18:35.824 --rc genhtml_legend=1 00:18:35.824 --rc geninfo_all_blocks=1 00:18:35.824 --rc geninfo_unexecuted_blocks=1 00:18:35.824 00:18:35.824 ' 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:35.824 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:35.824 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:42.388 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:42.388 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:18:42.388 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:42.388 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:42.388 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:42.388 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:42.388 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:42.388 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:18:42.388 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:42.388 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:18:42.388 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:18:42.388 
10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:18:42.388 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:18:42.388 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:18:42.388 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:18:42.388 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:42.388 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:42.388 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:42.388 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:42.388 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:42.388 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:42.389 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:42.389 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:42.389 Found net devices under 0000:86:00.0: cvl_0_0 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:42.389 Found net devices under 0000:86:00.1: cvl_0_1 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:42.389 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:42.389 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.438 ms 00:18:42.389 00:18:42.389 --- 10.0.0.2 ping statistics --- 00:18:42.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.389 rtt min/avg/max/mdev = 0.438/0.438/0.438/0.000 ms 00:18:42.389 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:42.389 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:42.389 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:18:42.389 00:18:42.389 --- 10.0.0.1 ping statistics --- 00:18:42.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.389 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:18:42.389 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:42.389 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:18:42.389 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:42.389 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:42.389 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:42.389 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:42.389 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:42.389 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:42.389 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:42.390 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:42.390 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:42.390 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:42.390 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:42.390 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=1700513 00:18:42.390 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 1700513 00:18:42.390 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:42.390 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 1700513 ']' 00:18:42.390 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.390 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:18:42.390 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:42.390 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:42.390 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:42.390 [2024-11-19 10:45:49.105873] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:18:42.390 [2024-11-19 10:45:49.105919] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:42.390 [2024-11-19 10:45:49.190124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:42.390 [2024-11-19 10:45:49.237595] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:42.390 [2024-11-19 10:45:49.237630] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:42.390 [2024-11-19 10:45:49.237637] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:42.390 [2024-11-19 10:45:49.237643] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:42.390 [2024-11-19 10:45:49.237648] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
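For reference, the nvmfappstart sequence traced above reduces to the pattern below. The launch line is copied from the trace; the polling loop is only a stand-in for the harness's waitforlisten helper, whose body this excerpt does not show (the trace only reveals that it retries up to max_retries=100 while waiting on /var/tmp/spdk.sock):

# Start nvmf_tgt inside the target network namespace, as traced above.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!
# Stand-in for waitforlisten: poll until the app opens its RPC socket,
# since rpc_cmd calls against /var/tmp/spdk.sock fail before that point.
for ((i = 0; i < 100; i++)); do
    [[ -S /var/tmp/spdk.sock ]] && break
    sleep 0.1
done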
00:18:42.390 [2024-11-19 10:45:49.238884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:42.390 [2024-11-19 10:45:49.238995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:18:42.390 [2024-11-19 10:45:49.239103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:42.390 [2024-11-19 10:45:49.239103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:18:42.647 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:42.647 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:18:42.647 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:42.647 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:42.647 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:42.647 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:42.647 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:42.647 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.647 10:45:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:42.647 [2024-11-19 10:45:50.001744] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:42.647 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.647 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:42.647 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.647 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:42.647 Malloc0 00:18:42.647 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.647 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:42.647 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.647 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:42.647 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.647 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:42.647 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.647 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:42.647 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.647 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:18:42.647 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.647 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:42.647 [2024-11-19 10:45:50.046615] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:42.647 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.648 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:42.648 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:42.648 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:18:42.648 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:18:42.648 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:42.648 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:42.648 { 00:18:42.648 "params": { 00:18:42.648 "name": "Nvme$subsystem", 00:18:42.648 "trtype": "$TEST_TRANSPORT", 00:18:42.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:42.648 "adrfam": "ipv4", 00:18:42.648 "trsvcid": "$NVMF_PORT", 00:18:42.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:42.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:42.648 "hdgst": ${hdgst:-false}, 00:18:42.648 "ddgst": ${ddgst:-false} 00:18:42.648 }, 00:18:42.648 "method": "bdev_nvme_attach_controller" 00:18:42.648 } 00:18:42.648 EOF 00:18:42.648 )") 00:18:42.648 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:18:42.648 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:18:42.648 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:18:42.648 10:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:42.648 "params": { 00:18:42.648 "name": "Nvme1", 00:18:42.648 "trtype": "tcp", 00:18:42.648 "traddr": "10.0.0.2", 00:18:42.648 "adrfam": "ipv4", 00:18:42.648 "trsvcid": "4420", 00:18:42.648 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.648 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:42.648 "hdgst": false, 00:18:42.648 "ddgst": false 00:18:42.648 }, 00:18:42.648 "method": "bdev_nvme_attach_controller" 00:18:42.648 }' 00:18:42.905 [2024-11-19 10:45:50.097412] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
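The gen_nvmf_target_json expansion printed above comes from a heredoc-per-subsystem loop joined on a comma IFS and pretty-printed with jq. A condensed, runnable sketch of that pattern, with the trace's environment variables replaced by the literal values they resolved to on this run (tcp, 10.0.0.2, port 4420):

gen_json() {
  local config=() subsystem
  for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
  done
  local IFS=,
  printf '%s\n' "${config[*]}" | jq .   # one attach-controller entry per subsystem
}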
00:18:42.905 [2024-11-19 10:45:50.097456] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1700765 ] 00:18:42.905 [2024-11-19 10:45:50.176548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:42.905 [2024-11-19 10:45:50.225631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:42.905 [2024-11-19 10:45:50.225742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.905 [2024-11-19 10:45:50.225743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:43.163 I/O targets: 00:18:43.163 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:43.163 00:18:43.163 00:18:43.163 CUnit - A unit testing framework for C - Version 2.1-3 00:18:43.163 http://cunit.sourceforge.net/ 00:18:43.163 00:18:43.163 00:18:43.163 Suite: bdevio tests on: Nvme1n1 00:18:43.163 Test: blockdev write read block ...passed 00:18:43.163 Test: blockdev write zeroes read block ...passed 00:18:43.421 Test: blockdev write zeroes read no split ...passed 00:18:43.421 Test: blockdev write zeroes read split ...passed 00:18:43.421 Test: blockdev write zeroes read split partial ...passed 00:18:43.421 Test: blockdev reset ...[2024-11-19 10:45:50.680665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:43.421 [2024-11-19 10:45:50.680729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb6c920 (9): Bad file descriptor 00:18:43.421 [2024-11-19 10:45:50.694293] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
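Taken together, the rpc_cmd provisioning calls and the bdevio launch traced above amount to the sequence below. All flags are copied verbatim from the trace; calling rpc.py directly is a simplification of the harness's rpc_cmd wrapper, and the <(...) process substitution stands in for the /dev/fd/62 seen in the recorded launch line:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Drive the CUnit suite against that listener, feeding the JSON over an fd:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio \
    --json <(gen_nvmf_target_json) --no-huge -s 1024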
00:18:43.421 passed 00:18:43.421 Test: blockdev write read 8 blocks ...passed 00:18:43.421 Test: blockdev write read size > 128k ...passed 00:18:43.421 Test: blockdev write read invalid size ...passed 00:18:43.421 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:43.421 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:43.421 Test: blockdev write read max offset ...passed 00:18:43.421 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:43.679 Test: blockdev writev readv 8 blocks ...passed 00:18:43.679 Test: blockdev writev readv 30 x 1block ...passed 00:18:43.679 Test: blockdev writev readv block ...passed 00:18:43.679 Test: blockdev writev readv size > 128k ...passed 00:18:43.679 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:43.679 Test: blockdev comparev and writev ...[2024-11-19 10:45:50.948744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:43.679 [2024-11-19 10:45:50.948774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.679 [2024-11-19 10:45:50.948788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:43.679 [2024-11-19 10:45:50.948797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:43.679 [2024-11-19 10:45:50.949047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:43.679 [2024-11-19 10:45:50.949058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:43.679 [2024-11-19 10:45:50.949070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:43.679 [2024-11-19 10:45:50.949077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:43.679 [2024-11-19 10:45:50.949320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:43.679 [2024-11-19 10:45:50.949331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:43.679 [2024-11-19 10:45:50.949343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:43.679 [2024-11-19 10:45:50.949350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:43.680 [2024-11-19 10:45:50.949591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:43.680 [2024-11-19 10:45:50.949603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:43.680 [2024-11-19 10:45:50.949622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:43.680 [2024-11-19 10:45:50.949630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:43.680 passed 00:18:43.680 Test: blockdev nvme passthru rw ...passed 00:18:43.680 Test: blockdev nvme passthru vendor specific ...[2024-11-19 10:45:51.032330] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:43.680 [2024-11-19 10:45:51.032347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:43.680 [2024-11-19 10:45:51.032452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:43.680 [2024-11-19 10:45:51.032463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:43.680 [2024-11-19 10:45:51.032564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:43.680 [2024-11-19 10:45:51.032574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:43.680 [2024-11-19 10:45:51.032672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:43.680 [2024-11-19 10:45:51.032683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:43.680 passed 00:18:43.680 Test: blockdev nvme admin passthru ...passed 00:18:43.680 Test: blockdev copy ...passed 00:18:43.680 00:18:43.680 Run Summary: Type Total Ran Passed Failed Inactive 00:18:43.680 suites 1 1 n/a 0 0 00:18:43.680 tests 23 23 23 0 0 00:18:43.680 asserts 152 152 152 0 n/a 00:18:43.680 00:18:43.680 Elapsed time = 1.164 seconds 00:18:43.937 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:43.937 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.937 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:43.937 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.937 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:43.937 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:43.937 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:43.937 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:43.937 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:43.937 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:43.937 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:43.937 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:43.937 rmmod nvme_tcp 00:18:44.195 rmmod nvme_fabrics 00:18:44.195 rmmod nvme_keyring 00:18:44.195 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:44.195 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:18:44.195 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:44.195 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 1700513 ']' 00:18:44.195 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 1700513 00:18:44.195 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 1700513 ']' 00:18:44.195 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 1700513 00:18:44.195 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:18:44.195 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:44.195 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1700513 00:18:44.195 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:18:44.195 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:18:44.195 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1700513' 00:18:44.195 killing process with pid 1700513 00:18:44.195 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 1700513 00:18:44.195 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 1700513 00:18:44.454 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:44.454 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:44.454 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:44.454 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:44.454 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:18:44.454 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:44.454 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:18:44.454 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:44.454 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:44.454 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.454 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:44.454 10:45:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.991 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:46.991 00:18:46.991 real 0m11.024s 00:18:46.991 user 0m14.263s 00:18:46.991 sys 0m5.401s 00:18:46.991 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:46.991 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:18:46.991 ************************************ 00:18:46.991 END TEST nvmf_bdevio_no_huge 00:18:46.991 ************************************ 00:18:46.991 10:45:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:46.991 10:45:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:46.991 10:45:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:46.991 10:45:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:46.991 ************************************ 00:18:46.991 START TEST nvmf_tls 00:18:46.991 ************************************ 00:18:46.991 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:46.991 * Looking for test storage... 00:18:46.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:46.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.991 --rc genhtml_branch_coverage=1 00:18:46.991 --rc genhtml_function_coverage=1 00:18:46.991 --rc genhtml_legend=1 00:18:46.991 --rc geninfo_all_blocks=1 00:18:46.991 --rc geninfo_unexecuted_blocks=1 00:18:46.991 00:18:46.991 ' 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:46.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.991 --rc genhtml_branch_coverage=1 00:18:46.991 --rc genhtml_function_coverage=1 00:18:46.991 --rc genhtml_legend=1 00:18:46.991 --rc geninfo_all_blocks=1 00:18:46.991 --rc geninfo_unexecuted_blocks=1 00:18:46.991 00:18:46.991 ' 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:46.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.991 --rc genhtml_branch_coverage=1 00:18:46.991 --rc genhtml_function_coverage=1 00:18:46.991 --rc genhtml_legend=1 00:18:46.991 --rc geninfo_all_blocks=1 00:18:46.991 --rc geninfo_unexecuted_blocks=1 00:18:46.991 00:18:46.991 ' 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:46.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.991 --rc genhtml_branch_coverage=1 00:18:46.991 --rc genhtml_function_coverage=1 00:18:46.991 --rc genhtml_legend=1 00:18:46.991 --rc geninfo_all_blocks=1 00:18:46.991 --rc geninfo_unexecuted_blocks=1 00:18:46.991 00:18:46.991 ' 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
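The lt/cmp_versions walk traced above (deciding whether the installed lcov predates 2.x) is a field-wise numeric compare over version strings split on ".", "-" and ":". A condensed sketch of the same logic, with the trace's padding of unequal-length versions folded into ${...:-0} defaults:

lt() {
  local -a ver1 ver2
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  local v max=$((${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}))
  for ((v = 0; v < max; v++)); do
    ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
    ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
  done
  return 1  # equal versions are not strictly less-than
}
lt 1.15 2 && echo "lcov predates 2.x"   # as in the trace: 1 < 2 on the first field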
00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:46.991 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:46.992 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:46.992 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:46.992 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:46.992 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:46.992 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:46.992 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:46.992 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:46.992 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:18:46.992 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:46.992 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:46.992 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:46.992 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.992 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.992 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.992 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:46.992 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.992 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:18:46.992 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:46.992 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:46.992 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:46.992 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:46.992 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:46.992 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:46.992 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:46.992 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:46.992 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:46.992 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:46.992 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:46.992 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:46.992 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:46.992 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:46.992 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:46.992 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:46.992 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:46.992 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.992 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:46.992 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.992 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:46.992 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:46.992 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:18:46.992 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
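The array setup traced above classifies NICs through "vendor:device" lookups into a pci_bus_cache map built earlier in the run (its population is outside this excerpt and assumed here). A sketch of the pattern, trimmed to the IDs that matter on this rig:

declare -A pci_bus_cache            # "vendor:device" -> space-separated PCI addresses
intel=0x8086 mellanox=0x15b3
e810=() x722=() mlx=()
# Unquoted on purpose, mirroring the trace: multi-device entries word-split
# into one array element per PCI address.
e810+=(${pci_bus_cache["$intel:0x1592"]})
e810+=(${pci_bus_cache["$intel:0x159b"]})   # the 0000:86:00.x ports found above
x722+=(${pci_bus_cache["$intel:0x37d2"]})
mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
# Transport is tcp and the family is e810, so the e810 list becomes pci_devs;
# the rdma-only branches in the trace are skipped on this run.
pci_devs=("${e810[@]}")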
00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:53.683 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:53.683 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:53.683 Found net devices under 0000:86:00.0: cvl_0_0 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:53.683 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:53.684 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:53.684 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:53.684 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:53.684 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:53.684 Found net devices under 0000:86:00.1: cvl_0_1 00:18:53.684 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:53.684 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:53.684 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:18:53.684 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:53.684 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:53.684 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:53.684 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:53.684 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:53.684 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:53.684 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:53.684 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:53.684 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:53.684 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:53.684 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:53.684 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:18:53.684 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:53.684 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:53.684 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:53.684 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:53.684 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:53.684 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:53.684 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:53.684 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:53.684 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:53.684 10:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:53.684 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:53.684 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:53.684 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:53.684 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:53.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:53.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.424 ms 00:18:53.684 00:18:53.684 --- 10.0.0.2 ping statistics --- 00:18:53.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.684 rtt min/avg/max/mdev = 0.424/0.424/0.424/0.000 ms 00:18:53.684 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:53.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:53.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:18:53.684 00:18:53.684 --- 10.0.0.1 ping statistics --- 00:18:53.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.684 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:18:53.684 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:53.684 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:18:53.684 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:53.684 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:53.684 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:53.684 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:53.684 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:53.684 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:53.684 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:53.684 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:53.684 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:53.684 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:53.684 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.684 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1704532 00:18:53.684 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1704532 00:18:53.684 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:53.684 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1704532 ']' 00:18:53.684 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.684 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:53.684 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:53.684 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:53.684 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.684 [2024-11-19 10:46:00.186956] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
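The bring-up above (prepare_net_devs / nvmf_tcp_init) carved the two ice ports into a point-to-point TLS test rig: cvl_0_0 was moved into the cvl_0_0_ns_spdk namespace as the target side, cvl_0_1 stayed in the root namespace as the initiator side, and the two pings confirmed reachability in both directions before nvmf_tgt was started under "ip netns exec". A condensed sketch of the equivalent commands, with the interface names and addresses exactly as used in this run:

# target port into its own namespace; initiator port stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
ping -c 1 10.0.0.2                                                  # root ns -> target ns sanity check

All of these commands appear verbatim in the trace above; the target process is then launched inside the namespace via the NVMF_TARGET_NS_CMD prefix.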
00:18:53.684 [2024-11-19 10:46:00.186997] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:53.684 [2024-11-19 10:46:00.266178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.684 [2024-11-19 10:46:00.307186] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:53.684 [2024-11-19 10:46:00.307224] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:53.684 [2024-11-19 10:46:00.307231] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:53.684 [2024-11-19 10:46:00.307237] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:53.684 [2024-11-19 10:46:00.307242] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:53.684 [2024-11-19 10:46:00.307809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:53.684 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:53.684 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:53.684 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:53.684 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:53.684 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.684 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.684 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:18:53.684 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:53.684 true 00:18:53.684 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:53.684 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:18:53.684 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:18:53.684 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:18:53.684 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:53.684 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:53.684 10:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:18:53.942 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:18:53.942 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:18:53.942 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:53.942 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:53.942 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:18:54.200 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:18:54.200 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:18:54.200 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:54.200 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:18:54.458 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:18:54.458 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:18:54.458 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:54.716 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:54.716 10:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:18:54.716 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:18:54.716 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:18:54.716 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:54.975 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:54.975 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:18:55.235 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:18:55.235 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:18:55.235 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:55.235 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:55.235 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:55.235 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:55.235 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:18:55.235 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:55.235 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:55.235 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:55.235 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:55.235 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:55.235 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:18:55.235 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:55.235 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:18:55.235 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:55.235 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:55.235 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:55.235 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:55.235 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.GgEl8KWgZx 00:18:55.235 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:18:55.235 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.mTi2bX3ntL 00:18:55.235 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:55.235 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:55.235 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.GgEl8KWgZx 00:18:55.235 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.mTi2bX3ntL 00:18:55.235 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:55.493 10:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:55.751 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.GgEl8KWgZx 00:18:55.751 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.GgEl8KWgZx 00:18:55.751 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:56.009 [2024-11-19 10:46:03.242504] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:56.009 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:56.009 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:56.268 [2024-11-19 10:46:03.631500] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:56.268 [2024-11-19 10:46:03.631706] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:56.268 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:56.526 malloc0 00:18:56.526 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:18:56.785 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.GgEl8KWgZx
00:18:57.044 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
00:19:09.252 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.GgEl8KWgZx
00:19:09.252 Initializing NVMe Controllers
00:19:09.252 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:19:09.252 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:19:09.252 Initialization complete. Launching workers.
00:19:09.252 ========================================================
00:19:09.252                                                                              Latency(us)
00:19:09.252 Device Information                                                      :       IOPS      MiB/s    Average        min        max
00:19:09.252 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   15996.89      62.49    4000.88     865.71  205271.37
00:19:09.252 ========================================================
00:19:09.252 Total                                                                   :   15996.89      62.49    4000.88     865.71  205271.37
00:19:09.252
00:19:09.252 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GgEl8KWgZx
00:19:09.252 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:19:09.252 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:19:09.252 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:19:09.252 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.GgEl8KWgZx
00:19:09.252 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:19:09.252 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1706881
00:19:09.252 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:19:09.252 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:19:09.252 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1706881 /var/tmp/bdevperf.sock
00:19:09.252 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1706881 ']'
00:19:09.252 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:19:09.252 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:09.252 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:19:09.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:09.252 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:09.252 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:09.252 [2024-11-19 10:46:14.554920] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:19:09.252 [2024-11-19 10:46:14.554974] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1706881 ] 00:19:09.252 [2024-11-19 10:46:14.630175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.252 [2024-11-19 10:46:14.672261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:09.252 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:09.252 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:09.252 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GgEl8KWgZx 00:19:09.252 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:09.252 [2024-11-19 10:46:15.128327] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:09.252 TLSTESTn1 00:19:09.252 10:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:09.252 Running I/O for 10 seconds... 
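The run_bdevperf helper driving this never passes the target on bdevperf's command line: bdevperf starts idle with -z on a private RPC socket, the key and the TLS-enabled controller are configured over that socket, and the bdevperf.py helper then triggers the run (the "Running I/O for 10 seconds..." line above is its output; its -t 20 appears to be the helper's own wait timeout, separate from the workload's -t 10). Condensed from the trace, with the long /var/jenkins/... prefixes shortened for readability:

build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GgEl8KWgZx
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0
examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

The same launch pattern repeats for every negative test below; only the hostnqn, subnqn, and key file change.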
00:19:10.188  5324.00 IOPS, 20.80 MiB/s
[2024-11-19T09:46:18.572Z]  5421.00 IOPS, 21.18 MiB/s
[2024-11-19T09:46:19.509Z]  5447.00 IOPS, 21.28 MiB/s
[2024-11-19T09:46:20.445Z]  5426.75 IOPS, 21.20 MiB/s
[2024-11-19T09:46:21.381Z]  5397.20 IOPS, 21.08 MiB/s
[2024-11-19T09:46:22.757Z]  5408.00 IOPS, 21.12 MiB/s
[2024-11-19T09:46:23.693Z]  5365.43 IOPS, 20.96 MiB/s
[2024-11-19T09:46:24.629Z]  5292.25 IOPS, 20.67 MiB/s
[2024-11-19T09:46:25.566Z]  5237.00 IOPS, 20.46 MiB/s
[2024-11-19T09:46:25.566Z]  5164.30 IOPS, 20.17 MiB/s
00:19:18.117                                                                              Latency(us)
00:19:18.117 [2024-11-19T09:46:25.566Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:18.117 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:18.117 Verification LBA range: start 0x0 length 0x2000
00:19:18.117 TLSTESTn1 : 10.02    5166.49      20.18       0.00     0.00   24732.53    4872.46   30317.52
00:19:18.117 [2024-11-19T09:46:25.566Z] ===================================================================================================================
00:19:18.117 [2024-11-19T09:46:25.566Z] Total     :          5166.49      20.18       0.00     0.00   24732.53    4872.46   30317.52
00:19:18.117 {
00:19:18.117   "results": [
00:19:18.117     {
00:19:18.117       "job": "TLSTESTn1",
00:19:18.117       "core_mask": "0x4",
00:19:18.117       "workload": "verify",
00:19:18.117       "status": "finished",
00:19:18.117       "verify_range": {
00:19:18.117         "start": 0,
00:19:18.117         "length": 8192
00:19:18.117       },
00:19:18.117       "queue_depth": 128,
00:19:18.117       "io_size": 4096,
00:19:18.117       "runtime": 10.019956,
00:19:18.117       "iops": 5166.489753048816,
00:19:18.117       "mibps": 20.181600597846938,
00:19:18.117       "io_failed": 0,
00:19:18.117       "io_timeout": 0,
00:19:18.117       "avg_latency_us": 24732.530756485456,
00:19:18.117       "min_latency_us": 4872.459130434782,
00:19:18.117       "max_latency_us": 30317.52347826087
00:19:18.117     }
00:19:18.117   ],
00:19:18.117   "core_count": 1
00:19:18.117 }
00:19:18.117 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:19:18.117 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1706881
00:19:18.117 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1706881 ']'
00:19:18.117 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1706881
00:19:18.117 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:19:18.117 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:18.117 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1706881
00:19:18.117 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:19:18.117 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:19:18.117 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1706881'
killing process with pid 1706881
00:19:18.117 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1706881
Received shutdown signal, test time was about 10.000000 seconds
00:19:18.117
00:19:18.117                                                                              Latency(us)
00:19:18.117 [2024-11-19T09:46:25.566Z] ===================================================================================================================
00:19:18.117 [2024-11-19T09:46:25.566Z] Total     :             0.00       0.00       0.00      0.00       0.00       0.00       0.00
00:19:18.117 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1706881
00:19:18.377 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mTi2bX3ntL
00:19:18.377 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0
00:19:18.377 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mTi2bX3ntL
00:19:18.377 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf
00:19:18.377 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:18.377 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf
00:19:18.377 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:18.377 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mTi2bX3ntL
00:19:18.377 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:19:18.377 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:19:18.377 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:19:18.377 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.mTi2bX3ntL
00:19:18.377 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:19:18.377 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1708719
00:19:18.377 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:19:18.377 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:19:18.377 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1708719 /var/tmp/bdevperf.sock
00:19:18.377 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1708719 ']'
00:19:18.377 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:19:18.377 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:18.377 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
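The /tmp/tmp.GgEl8KWgZx and /tmp/tmp.mTi2bX3ntL files handed to these bdevperf instances were generated earlier by format_interchange_psk. Judging from the logged helper chain (format_key plus an inline python step) and the resulting NVMeTLSkey-1:01:...: strings, the layout follows the NVMe TLS PSK interchange format: a NVMeTLSkey-1 prefix, a two-digit HMAC identifier, and base64 of the key bytes with a little-endian CRC-32 appended. A self-contained approximation of that helper (a sketch inferred from the log output, not a copy of nvmf/common.sh):

format_interchange_psk() {
    local key=$1 hmac=${2:-1}    # hmac=1 -> "01" (SHA-256)
    python3 - "$key" "$hmac" <<'PY'
import base64, struct, sys, zlib
key, hmac = sys.argv[1].encode(), int(sys.argv[2])
# configured PSK = key bytes followed by their CRC-32, packed little-endian
crc = struct.pack('<I', zlib.crc32(key) & 0xFFFFFFFF)
print('NVMeTLSkey-1:%02x:%s:' % (hmac, base64.b64encode(key + crc).decode()))
PY
}
format_interchange_psk 00112233445566778899aabbccddeeff 1
# should reproduce the key0 value logged above (...ZmZwJEiQ:) if the inference holds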
00:19:18.377 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:18.377 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.377 [2024-11-19 10:46:25.642821] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:19:18.377 [2024-11-19 10:46:25.642869] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1708719 ] 00:19:18.377 [2024-11-19 10:46:25.716467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.377 [2024-11-19 10:46:25.756463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:18.636 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:18.636 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:18.636 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.mTi2bX3ntL 00:19:18.636 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:18.895 [2024-11-19 10:46:26.219637] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:18.895 [2024-11-19 10:46:26.230728] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:18.895 [2024-11-19 10:46:26.231013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f8170 (107): Transport endpoint is not connected 00:19:18.895 [2024-11-19 10:46:26.232007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f8170 (9): Bad file descriptor 00:19:18.896 [2024-11-19 10:46:26.233009] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:18.896 [2024-11-19 10:46:26.233020] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:18.896 [2024-11-19 10:46:26.233027] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:18.896 [2024-11-19 10:46:26.233037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:19:18.896 request:
00:19:18.896 {
00:19:18.896   "name": "TLSTEST",
00:19:18.896   "trtype": "tcp",
00:19:18.896   "traddr": "10.0.0.2",
00:19:18.896   "adrfam": "ipv4",
00:19:18.896   "trsvcid": "4420",
00:19:18.896   "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:19:18.896   "hostnqn": "nqn.2016-06.io.spdk:host1",
00:19:18.896   "prchk_reftag": false,
00:19:18.896   "prchk_guard": false,
00:19:18.896   "hdgst": false,
00:19:18.896   "ddgst": false,
00:19:18.896   "psk": "key0",
00:19:18.896   "allow_unrecognized_csi": false,
00:19:18.896   "method": "bdev_nvme_attach_controller",
00:19:18.896   "req_id": 1
00:19:18.896 }
00:19:18.896 Got JSON-RPC error response
00:19:18.896 response:
00:19:18.896 {
00:19:18.896   "code": -5,
00:19:18.896   "message": "Input/output error"
00:19:18.896 }
00:19:18.896 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1708719
00:19:18.896 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1708719 ']'
00:19:18.896 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1708719
00:19:18.896 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:19:18.896 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:18.896 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1708719
00:19:18.896 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:19:18.896 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:19:18.896 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1708719'
killing process with pid 1708719
00:19:18.896 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1708719
Received shutdown signal, test time was about 10.000000 seconds
00:19:18.896
00:19:18.896                                                                              Latency(us)
00:19:18.896 [2024-11-19T09:46:26.345Z] ===================================================================================================================
00:19:18.896 [2024-11-19T09:46:26.345Z] Total     :             0.00       0.00       0.00      0.00       0.00  18446744073709551616.00       0.00
00:19:18.896 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1708719
00:19:19.155 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1
00:19:19.155 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1
00:19:19.155 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:19:19.155 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:19:19.155 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:19:19.155 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.GgEl8KWgZx
00:19:19.155 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0
00:19:19.155 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2
/tmp/tmp.GgEl8KWgZx 00:19:19.156 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:19.156 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:19.156 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:19.156 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:19.156 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.GgEl8KWgZx 00:19:19.156 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:19.156 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:19.156 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:19.156 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.GgEl8KWgZx 00:19:19.156 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:19.156 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1708740 00:19:19.156 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:19.156 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:19.156 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1708740 /var/tmp/bdevperf.sock 00:19:19.156 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1708740 ']' 00:19:19.156 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:19.156 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:19.156 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:19.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:19.156 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:19.156 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:19.156 [2024-11-19 10:46:26.513006] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
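This attach attempt, like the one before it and the two that follow, runs under the harness's NOT wrapper (target/tls.sh@147/@150): the test step passes only if run_bdevperf fails. Stripped of the argument validation and the "es > 128" signal check visible in the trace, the wrapper amounts to the following simplified sketch (a paraphrase of the real helper in common/autotest_common.sh, not its exact body):

# succeed (return 0) only when the wrapped command fails
NOT() {
    if "$@"; then
        return 1
    else
        return 0
    fi
}
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.GgEl8KWgZx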
00:19:19.156 [2024-11-19 10:46:26.513055] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1708740 ] 00:19:19.156 [2024-11-19 10:46:26.586357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.415 [2024-11-19 10:46:26.627495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:19.415 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:19.415 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:19.415 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GgEl8KWgZx 00:19:19.674 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:19.674 [2024-11-19 10:46:27.102354] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:19.674 [2024-11-19 10:46:27.112848] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:19.674 [2024-11-19 10:46:27.112870] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:19.674 [2024-11-19 10:46:27.112891] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:19.674 [2024-11-19 10:46:27.113714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1876170 (107): Transport endpoint is not connected 00:19:19.674 [2024-11-19 10:46:27.114707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1876170 (9): Bad file descriptor 00:19:19.674 [2024-11-19 10:46:27.115707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:19.674 [2024-11-19 10:46:27.115718] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:19.674 [2024-11-19 10:46:27.115725] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:19.675 [2024-11-19 10:46:27.115735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
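The target-side errors above show the real failure point: the TLS ClientHello carried the PSK identity "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1" (an NVMe0R01 tag followed by the host and subsystem NQNs), but only host1 was registered on cnode1 via nvmf_subsystem_add_host, so the PSK lookup fails and the target drops the connection during the handshake; the initiator only ever sees "Transport endpoint is not connected". For this attach to succeed, host2 would need its own registration on the target, along these lines (hypothetical fix mirroring the host1 calls earlier in the run; the key name key1 is illustrative):

scripts/rpc.py keyring_file_add_key key1 /tmp/tmp.GgEl8KWgZx
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk key1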
00:19:19.675 request:
00:19:19.675 {
00:19:19.675   "name": "TLSTEST",
00:19:19.675   "trtype": "tcp",
00:19:19.675   "traddr": "10.0.0.2",
00:19:19.675   "adrfam": "ipv4",
00:19:19.675   "trsvcid": "4420",
00:19:19.675   "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:19:19.675   "hostnqn": "nqn.2016-06.io.spdk:host2",
00:19:19.675   "prchk_reftag": false,
00:19:19.675   "prchk_guard": false,
00:19:19.675   "hdgst": false,
00:19:19.675   "ddgst": false,
00:19:19.675   "psk": "key0",
00:19:19.675   "allow_unrecognized_csi": false,
00:19:19.675   "method": "bdev_nvme_attach_controller",
00:19:19.675   "req_id": 1
00:19:19.675 }
00:19:19.675 Got JSON-RPC error response
00:19:19.675 response:
00:19:19.675 {
00:19:19.675   "code": -5,
00:19:19.675   "message": "Input/output error"
00:19:19.675 }
00:19:19.934 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1708740
00:19:19.934 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1708740 ']'
00:19:19.934 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1708740
00:19:19.934 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:19:19.934 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:19.934 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1708740
00:19:19.934 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:19:19.934 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:19:19.934 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1708740'
killing process with pid 1708740
00:19:19.934 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1708740
Received shutdown signal, test time was about 10.000000 seconds
00:19:19.934
00:19:19.934                                                                              Latency(us)
00:19:19.934 [2024-11-19T09:46:27.383Z] ===================================================================================================================
00:19:19.934 [2024-11-19T09:46:27.383Z] Total     :             0.00       0.00       0.00      0.00       0.00  18446744073709551616.00       0.00
00:19:19.934 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1708740
00:19:19.934 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1
00:19:19.934 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1
00:19:19.934 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:19:19.934 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:19:19.934 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:19:19.934 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.GgEl8KWgZx
00:19:19.934 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0
00:19:19.934 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1
/tmp/tmp.GgEl8KWgZx 00:19:19.934 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:19.934 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:19.934 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:19.934 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:19.934 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.GgEl8KWgZx 00:19:19.934 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:19.934 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:19.934 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:19.934 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.GgEl8KWgZx 00:19:19.934 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:19.934 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1708971 00:19:19.934 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:19.934 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:19.934 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1708971 /var/tmp/bdevperf.sock 00:19:19.934 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1708971 ']' 00:19:19.934 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:19.934 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:19.934 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:19.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:19.934 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:19.934 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:20.200 [2024-11-19 10:46:27.399992] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:19:20.200 [2024-11-19 10:46:27.400040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1708971 ] 00:19:20.200 [2024-11-19 10:46:27.476901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.200 [2024-11-19 10:46:27.514357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:20.200 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:20.200 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:20.200 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GgEl8KWgZx 00:19:20.460 10:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:20.719 [2024-11-19 10:46:27.981612] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:20.719 [2024-11-19 10:46:27.986180] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:20.719 [2024-11-19 10:46:27.986203] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:20.719 [2024-11-19 10:46:27.986227] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:20.719 [2024-11-19 10:46:27.986969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x109d170 (107): Transport endpoint is not connected 00:19:20.719 [2024-11-19 10:46:27.987959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x109d170 (9): Bad file descriptor 00:19:20.719 [2024-11-19 10:46:27.988960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:19:20.719 [2024-11-19 10:46:27.988971] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:20.719 [2024-11-19 10:46:27.988978] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:20.719 [2024-11-19 10:46:27.988989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
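Same failure shape, different cause: this time the initiator asks for nqn.2016-06.io.spdk:cnode2, a subsystem that was never created, so the identity "NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2" cannot match any configured PSK and the handshake is again cut off. Making this case pass would require repeating the cnode1 setup (see setup_nvmf_tgt earlier) for cnode2, roughly as follows (illustrative only; the serial number is invented):

scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -s SPDK00000000000002 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 -k
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 --psk key0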
00:19:20.719 request:
00:19:20.719 {
00:19:20.719   "name": "TLSTEST",
00:19:20.719   "trtype": "tcp",
00:19:20.719   "traddr": "10.0.0.2",
00:19:20.719   "adrfam": "ipv4",
00:19:20.719   "trsvcid": "4420",
00:19:20.719   "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:19:20.719   "hostnqn": "nqn.2016-06.io.spdk:host1",
00:19:20.719   "prchk_reftag": false,
00:19:20.719   "prchk_guard": false,
00:19:20.719   "hdgst": false,
00:19:20.719   "ddgst": false,
00:19:20.719   "psk": "key0",
00:19:20.719   "allow_unrecognized_csi": false,
00:19:20.719   "method": "bdev_nvme_attach_controller",
00:19:20.719   "req_id": 1
00:19:20.719 }
00:19:20.719 Got JSON-RPC error response
00:19:20.719 response:
00:19:20.719 {
00:19:20.719   "code": -5,
00:19:20.719   "message": "Input/output error"
00:19:20.719 }
00:19:20.719 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1708971
00:19:20.719 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1708971 ']'
00:19:20.719 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1708971
00:19:20.719 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:19:20.719 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:20.719 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1708971
00:19:20.719 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:19:20.719 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:19:20.719 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1708971'
killing process with pid 1708971
00:19:20.719 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1708971
Received shutdown signal, test time was about 10.000000 seconds
00:19:20.719
00:19:20.719                                                                              Latency(us)
00:19:20.719 [2024-11-19T09:46:28.168Z] ===================================================================================================================
00:19:20.719 [2024-11-19T09:46:28.168Z] Total     :             0.00       0.00       0.00      0.00       0.00  18446744073709551616.00       0.00
00:19:20.719 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1708971
00:19:20.979 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1
00:19:20.979 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1
00:19:20.979 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:19:20.979 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:19:20.979 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:19:20.979 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''
00:19:20.979 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0
00:19:20.979 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''
00:19:20.979
10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:20.979 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:20.979 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:20.979 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:20.979 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:20.979 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:20.979 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:20.979 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:20.979 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:20.979 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:20.979 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1709140 00:19:20.979 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:20.979 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:20.979 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1709140 /var/tmp/bdevperf.sock 00:19:20.979 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1709140 ']' 00:19:20.979 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:20.979 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:20.979 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:20.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:20.979 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:20.979 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:20.979 [2024-11-19 10:46:28.270590] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
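The harness wraps this run_bdevperf invocation in NOT, so the test passes only if the command fails. A rough sketch of the inversion helper, assuming the usual shape of such autotest wrappers (not the verbatim autotest_common.sh source):

NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded: the test should fail
    else
        return 0    # command failed as expected
    fi
}

Here the expected failure is keyring_file_add_key called with an empty key path, which keyring_file rejects before any connection is attempted, as the next lines show.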
00:19:20.979 [2024-11-19 10:46:28.270642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1709140 ] 00:19:20.979 [2024-11-19 10:46:28.346684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.979 [2024-11-19 10:46:28.385753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:21.238 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:21.238 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:21.238 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:21.238 [2024-11-19 10:46:28.660148] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:21.238 [2024-11-19 10:46:28.660181] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:21.238 request: 00:19:21.238 { 00:19:21.238 "name": "key0", 00:19:21.238 "path": "", 00:19:21.238 "method": "keyring_file_add_key", 00:19:21.238 "req_id": 1 00:19:21.238 } 00:19:21.238 Got JSON-RPC error response 00:19:21.238 response: 00:19:21.238 { 00:19:21.238 "code": -1, 00:19:21.238 "message": "Operation not permitted" 00:19:21.238 } 00:19:21.497 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:21.497 [2024-11-19 10:46:28.860751] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:21.497 [2024-11-19 10:46:28.860781] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:21.497 request: 00:19:21.497 { 00:19:21.497 "name": "TLSTEST", 00:19:21.497 "trtype": "tcp", 00:19:21.497 "traddr": "10.0.0.2", 00:19:21.497 "adrfam": "ipv4", 00:19:21.497 "trsvcid": "4420", 00:19:21.497 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:21.498 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:21.498 "prchk_reftag": false, 00:19:21.498 "prchk_guard": false, 00:19:21.498 "hdgst": false, 00:19:21.498 "ddgst": false, 00:19:21.498 "psk": "key0", 00:19:21.498 "allow_unrecognized_csi": false, 00:19:21.498 "method": "bdev_nvme_attach_controller", 00:19:21.498 "req_id": 1 00:19:21.498 } 00:19:21.498 Got JSON-RPC error response 00:19:21.498 response: 00:19:21.498 { 00:19:21.498 "code": -126, 00:19:21.498 "message": "Required key not available" 00:19:21.498 } 00:19:21.498 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1709140 00:19:21.498 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1709140 ']' 00:19:21.498 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1709140 00:19:21.498 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:21.498 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:21.498 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
1709140 00:19:21.498 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:21.498 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:21.498 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1709140' 00:19:21.498 killing process with pid 1709140 00:19:21.498 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1709140 00:19:21.498 Received shutdown signal, test time was about 10.000000 seconds 00:19:21.498 00:19:21.498 Latency(us) 00:19:21.498 [2024-11-19T09:46:28.947Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.498 [2024-11-19T09:46:28.947Z] =================================================================================================================== 00:19:21.498 [2024-11-19T09:46:28.947Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:21.498 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1709140 00:19:21.757 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:21.757 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:21.757 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:21.757 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:21.757 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:21.757 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1704532 00:19:21.757 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1704532 ']' 00:19:21.757 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1704532 00:19:21.757 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:21.757 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:21.757 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1704532 00:19:21.757 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:21.757 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:21.757 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1704532' 00:19:21.757 killing process with pid 1704532 00:19:21.757 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1704532 00:19:21.757 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1704532 00:19:22.017 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:22.017 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:22.017 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:22.017 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:22.017 10:46:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:22.017 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:19:22.017 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:22.017 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:22.017 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:22.017 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.NoflwE3gor 00:19:22.017 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:22.017 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.NoflwE3gor 00:19:22.017 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:22.017 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:22.017 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:22.017 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.017 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1709251 00:19:22.017 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:22.017 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1709251 00:19:22.017 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1709251 ']' 00:19:22.017 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.017 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:22.017 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:22.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:22.017 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:22.017 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.017 [2024-11-19 10:46:29.427024] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:19:22.017 [2024-11-19 10:46:29.427073] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:22.276 [2024-11-19 10:46:29.507969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.276 [2024-11-19 10:46:29.546523] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:22.276 [2024-11-19 10:46:29.546558] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:22.276 [2024-11-19 10:46:29.546566] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:22.276 [2024-11-19 10:46:29.546573] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:22.276 [2024-11-19 10:46:29.546578] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:22.276 [2024-11-19 10:46:29.547111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:22.276 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:22.276 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:22.276 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:22.276 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:22.276 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.276 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:22.276 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.NoflwE3gor 00:19:22.276 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.NoflwE3gor 00:19:22.276 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:22.535 [2024-11-19 10:46:29.867409] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:22.535 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:22.794 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:23.052 [2024-11-19 10:46:30.256424] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:23.052 [2024-11-19 10:46:30.256618] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:23.052 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:23.052 malloc0 00:19:23.052 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:23.311 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.NoflwE3gor 00:19:23.570 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:23.570 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NoflwE3gor 00:19:23.570 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:19:23.570 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:23.570 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:23.570 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.NoflwE3gor 00:19:23.829 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:23.829 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1709632 00:19:23.829 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:23.829 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1709632 /var/tmp/bdevperf.sock 00:19:23.829 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:23.829 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1709632 ']' 00:19:23.829 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:23.829 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:23.829 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:23.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:23.829 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:23.829 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:23.829 [2024-11-19 10:46:31.064727] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
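The key file /tmp/tmp.NoflwE3gor registered above holds the interchange-format PSK built at tls.sh@160: format_interchange_psk takes the hex string 00112233445566778899aabbccddeeff0011223344556677 and digest selector 2 and emits NVMeTLSkey-1:02:...: (the :02: tag is the SHA-384 retained-key variant). Decoding the base64 payload shows the key characters themselves followed by a 4-byte trailer; a sketch of the encoding, assuming the trailer is a little-endian CRC32 as in the NVMe/TCP PSK interchange format (the logged helper likewise shells out to python for this step):

key=00112233445566778899aabbccddeeff0011223344556677
python3 - "$key" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                    # key characters are encoded as-is
crc = zlib.crc32(key).to_bytes(4, "little")   # assumed 4-byte CRC32 trailer
print("NVMeTLSkey-1:02:" + base64.b64encode(key + crc).decode() + ":")
PY

With the key file chmod'ed to 0600 and registered on both the target (nvmf_subsystem_add_host --psk key0) and the initiator, this bdevperf instance (pid 1709632) is the first attach in this stretch that is expected to succeed.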
00:19:23.829 [2024-11-19 10:46:31.064776] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1709632 ] 00:19:23.829 [2024-11-19 10:46:31.138158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.829 [2024-11-19 10:46:31.180649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:23.829 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:23.829 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:23.829 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NoflwE3gor 00:19:24.087 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:24.346 [2024-11-19 10:46:31.640628] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:24.346 TLSTESTn1 00:19:24.346 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:24.605 Running I/O for 10 seconds... 00:19:26.476 5313.00 IOPS, 20.75 MiB/s [2024-11-19T09:46:34.863Z] 5406.50 IOPS, 21.12 MiB/s [2024-11-19T09:46:36.239Z] 5425.33 IOPS, 21.19 MiB/s [2024-11-19T09:46:37.175Z] 5389.00 IOPS, 21.05 MiB/s [2024-11-19T09:46:38.109Z] 5314.20 IOPS, 20.76 MiB/s [2024-11-19T09:46:39.044Z] 5249.00 IOPS, 20.50 MiB/s [2024-11-19T09:46:39.978Z] 5245.43 IOPS, 20.49 MiB/s [2024-11-19T09:46:40.911Z] 5217.75 IOPS, 20.38 MiB/s [2024-11-19T09:46:42.288Z] 5205.11 IOPS, 20.33 MiB/s [2024-11-19T09:46:42.288Z] 5178.30 IOPS, 20.23 MiB/s 00:19:34.839 Latency(us) 00:19:34.839 [2024-11-19T09:46:42.288Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:34.839 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:34.839 Verification LBA range: start 0x0 length 0x2000 00:19:34.839 TLSTESTn1 : 10.02 5182.13 20.24 0.00 0.00 24663.56 4872.46 30773.43 00:19:34.839 [2024-11-19T09:46:42.288Z] =================================================================================================================== 00:19:34.839 [2024-11-19T09:46:42.288Z] Total : 5182.13 20.24 0.00 0.00 24663.56 4872.46 30773.43 00:19:34.839 { 00:19:34.839 "results": [ 00:19:34.839 { 00:19:34.839 "job": "TLSTESTn1", 00:19:34.839 "core_mask": "0x4", 00:19:34.839 "workload": "verify", 00:19:34.839 "status": "finished", 00:19:34.839 "verify_range": { 00:19:34.839 "start": 0, 00:19:34.839 "length": 8192 00:19:34.839 }, 00:19:34.839 "queue_depth": 128, 00:19:34.839 "io_size": 4096, 00:19:34.839 "runtime": 10.017108, 00:19:34.839 "iops": 5182.134404460849, 00:19:34.839 "mibps": 20.24271251742519, 00:19:34.839 "io_failed": 0, 00:19:34.839 "io_timeout": 0, 00:19:34.839 "avg_latency_us": 24663.56320548106, 00:19:34.839 "min_latency_us": 4872.459130434782, 00:19:34.839 "max_latency_us": 30773.426086956522 00:19:34.839 } 00:19:34.839 ], 00:19:34.839 
"core_count": 1 00:19:34.839 } 00:19:34.839 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:34.839 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1709632 00:19:34.839 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1709632 ']' 00:19:34.839 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1709632 00:19:34.839 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:34.839 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:34.839 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1709632 00:19:34.839 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:34.839 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:34.839 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1709632' 00:19:34.839 killing process with pid 1709632 00:19:34.839 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1709632 00:19:34.839 Received shutdown signal, test time was about 10.000000 seconds 00:19:34.839 00:19:34.839 Latency(us) 00:19:34.839 [2024-11-19T09:46:42.288Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:34.839 [2024-11-19T09:46:42.288Z] =================================================================================================================== 00:19:34.839 [2024-11-19T09:46:42.288Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:34.839 10:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1709632 00:19:34.839 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.NoflwE3gor 00:19:34.839 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NoflwE3gor 00:19:34.839 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:34.839 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NoflwE3gor 00:19:34.839 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:34.839 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:34.839 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:34.839 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:34.839 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NoflwE3gor 00:19:34.839 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:34.839 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:34.839 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:19:34.839 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.NoflwE3gor 00:19:34.839 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:34.839 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1711326 00:19:34.839 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:34.839 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:34.839 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1711326 /var/tmp/bdevperf.sock 00:19:34.839 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1711326 ']' 00:19:34.839 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:34.839 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:34.839 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:34.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:34.839 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:34.839 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.839 [2024-11-19 10:46:42.158183] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
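This run repeats the attach after tls.sh@171 chmod'ed the key file to 0666, again under NOT: keyring_file refuses to register a key whose mode grants group or other access, so registration fails with the 0100666 error below and the attach then fails with 'Required key not available'. A sketch of the permission check being exercised, assuming the rule is simply that group/other bits must be clear:

key=/tmp/tmp.NoflwE3gor
mode=$(stat -c '%a' "$key")                  # e.g. 666 after the chmod above
if (( 0$mode & 0077 )); then                 # any group/other bits set?
    echo "Invalid permissions for key file '$key': 0100$mode" >&2
fi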
00:19:34.839 [2024-11-19 10:46:42.158232] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1711326 ] 00:19:34.839 [2024-11-19 10:46:42.236290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.839 [2024-11-19 10:46:42.278403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:35.098 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:35.098 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:35.098 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NoflwE3gor 00:19:35.098 [2024-11-19 10:46:42.544757] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.NoflwE3gor': 0100666 00:19:35.098 [2024-11-19 10:46:42.544785] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:35.357 request: 00:19:35.357 { 00:19:35.357 "name": "key0", 00:19:35.357 "path": "/tmp/tmp.NoflwE3gor", 00:19:35.357 "method": "keyring_file_add_key", 00:19:35.357 "req_id": 1 00:19:35.357 } 00:19:35.357 Got JSON-RPC error response 00:19:35.357 response: 00:19:35.357 { 00:19:35.357 "code": -1, 00:19:35.357 "message": "Operation not permitted" 00:19:35.357 } 00:19:35.357 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:35.357 [2024-11-19 10:46:42.721291] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:35.357 [2024-11-19 10:46:42.721317] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:35.357 request: 00:19:35.357 { 00:19:35.357 "name": "TLSTEST", 00:19:35.357 "trtype": "tcp", 00:19:35.357 "traddr": "10.0.0.2", 00:19:35.357 "adrfam": "ipv4", 00:19:35.357 "trsvcid": "4420", 00:19:35.357 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.357 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:35.357 "prchk_reftag": false, 00:19:35.357 "prchk_guard": false, 00:19:35.357 "hdgst": false, 00:19:35.357 "ddgst": false, 00:19:35.357 "psk": "key0", 00:19:35.357 "allow_unrecognized_csi": false, 00:19:35.357 "method": "bdev_nvme_attach_controller", 00:19:35.357 "req_id": 1 00:19:35.357 } 00:19:35.357 Got JSON-RPC error response 00:19:35.357 response: 00:19:35.357 { 00:19:35.357 "code": -126, 00:19:35.357 "message": "Required key not available" 00:19:35.357 } 00:19:35.357 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1711326 00:19:35.357 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1711326 ']' 00:19:35.357 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1711326 00:19:35.357 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:35.357 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:35.357 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1711326 00:19:35.357 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:35.357 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:35.357 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1711326' 00:19:35.357 killing process with pid 1711326 00:19:35.357 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1711326 00:19:35.357 Received shutdown signal, test time was about 10.000000 seconds 00:19:35.357 00:19:35.357 Latency(us) 00:19:35.357 [2024-11-19T09:46:42.806Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:35.357 [2024-11-19T09:46:42.806Z] =================================================================================================================== 00:19:35.357 [2024-11-19T09:46:42.806Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:35.357 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1711326 00:19:35.616 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:35.616 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:35.616 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:35.616 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:35.616 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:35.616 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1709251 00:19:35.616 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1709251 ']' 00:19:35.616 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1709251 00:19:35.616 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:35.616 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:35.616 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1709251 00:19:35.616 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:35.616 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:35.616 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1709251' 00:19:35.616 killing process with pid 1709251 00:19:35.616 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1709251 00:19:35.616 10:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1709251 00:19:35.875 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:35.875 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:35.875 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:35.875 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:35.875 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=1711561 00:19:35.875 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:35.875 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1711561 00:19:35.875 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1711561 ']' 00:19:35.875 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.875 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:35.875 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:35.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:35.875 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:35.875 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:35.875 [2024-11-19 10:46:43.221504] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:19:35.875 [2024-11-19 10:46:43.221553] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:35.875 [2024-11-19 10:46:43.298100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.133 [2024-11-19 10:46:43.334295] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:36.133 [2024-11-19 10:46:43.334328] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:36.133 [2024-11-19 10:46:43.334335] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:36.133 [2024-11-19 10:46:43.334341] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:36.133 [2024-11-19 10:46:43.334346] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
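This fresh nvmf target (pid 1711561) exists to rerun the bad-permissions case on the target side: tls.sh@178 wraps setup_nvmf_tgt in NOT while the key file is still 0666. Transport, subsystem, listener, and namespace setup all succeed below; the expected two-stage failure, in the same commands the setup path issues:

rpc.py keyring_file_add_key key0 /tmp/tmp.NoflwE3gor    # rejected: mode is 0100666
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
       nqn.2016-06.io.spdk:host1 --psk key0             # fails: Key 'key0' does not exist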
00:19:36.133 [2024-11-19 10:46:43.334913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:36.133 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:36.133 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:36.133 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:36.133 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:36.133 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.133 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:36.133 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.NoflwE3gor 00:19:36.133 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:36.133 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.NoflwE3gor 00:19:36.133 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:19:36.133 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:36.133 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:19:36.133 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:36.133 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.NoflwE3gor 00:19:36.133 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.NoflwE3gor 00:19:36.133 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:36.392 [2024-11-19 10:46:43.658150] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:36.392 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:36.651 10:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:36.651 [2024-11-19 10:46:44.055176] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:36.651 [2024-11-19 10:46:44.055385] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:36.651 10:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:36.909 malloc0 00:19:36.909 10:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:37.168 10:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.NoflwE3gor 00:19:37.426 [2024-11-19 
10:46:44.636716] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.NoflwE3gor': 0100666 00:19:37.426 [2024-11-19 10:46:44.636742] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:37.426 request: 00:19:37.426 { 00:19:37.426 "name": "key0", 00:19:37.426 "path": "/tmp/tmp.NoflwE3gor", 00:19:37.426 "method": "keyring_file_add_key", 00:19:37.426 "req_id": 1 00:19:37.426 } 00:19:37.426 Got JSON-RPC error response 00:19:37.426 response: 00:19:37.426 { 00:19:37.426 "code": -1, 00:19:37.426 "message": "Operation not permitted" 00:19:37.426 } 00:19:37.426 10:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:37.426 [2024-11-19 10:46:44.829232] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:37.426 [2024-11-19 10:46:44.829264] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:37.426 request: 00:19:37.426 { 00:19:37.426 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.426 "host": "nqn.2016-06.io.spdk:host1", 00:19:37.426 "psk": "key0", 00:19:37.426 "method": "nvmf_subsystem_add_host", 00:19:37.426 "req_id": 1 00:19:37.426 } 00:19:37.426 Got JSON-RPC error response 00:19:37.426 response: 00:19:37.426 { 00:19:37.426 "code": -32603, 00:19:37.426 "message": "Internal error" 00:19:37.426 } 00:19:37.426 10:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:37.426 10:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:37.426 10:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:37.426 10:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:37.426 10:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1711561 00:19:37.426 10:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1711561 ']' 00:19:37.426 10:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1711561 00:19:37.426 10:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:37.426 10:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:37.426 10:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1711561 00:19:37.685 10:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:37.685 10:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:37.685 10:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1711561' 00:19:37.685 killing process with pid 1711561 00:19:37.685 10:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1711561 00:19:37.685 10:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1711561 00:19:37.685 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.NoflwE3gor 00:19:37.685 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:37.685 10:46:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:37.685 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:37.685 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:37.685 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1711899 00:19:37.685 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1711899 00:19:37.685 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:37.685 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1711899 ']' 00:19:37.685 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.685 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:37.685 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:37.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:37.685 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:37.685 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:37.944 [2024-11-19 10:46:45.147321] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:19:37.944 [2024-11-19 10:46:45.147370] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:37.944 [2024-11-19 10:46:45.228458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.944 [2024-11-19 10:46:45.266985] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:37.944 [2024-11-19 10:46:45.267020] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:37.944 [2024-11-19 10:46:45.267027] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:37.944 [2024-11-19 10:46:45.267033] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:37.944 [2024-11-19 10:46:45.267038] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
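Target pid 1711899 begins the final positive sequence in this stretch: the key is back to 0600, setup_nvmf_tgt completes, a bdevperf initiator attaches over TLS, and both daemons are asked to dump their configuration. The two save_config dumps that follow come from calls of this shape (the script captures the output into shell variables tgtconf and bdevperfconf; the redirects to files are shown here only for illustration):

rpc.py save_config > tgtconf.json                              # target, default /var/tmp/spdk.sock
rpc.py -s /var/tmp/bdevperf.sock save_config > bdevperf.json   # bdevperf initiator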
00:19:37.944 [2024-11-19 10:46:45.267576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:37.944 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:37.944 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:37.944 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:37.944 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:37.944 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:38.201 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:38.201 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.NoflwE3gor 00:19:38.201 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.NoflwE3gor 00:19:38.201 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:38.201 [2024-11-19 10:46:45.574763] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:38.201 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:38.459 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:38.717 [2024-11-19 10:46:45.967769] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:38.717 [2024-11-19 10:46:45.967961] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:38.717 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:38.717 malloc0 00:19:38.975 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:38.975 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.NoflwE3gor 00:19:39.233 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:39.491 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1712297 00:19:39.491 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:39.491 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:39.491 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1712297 /var/tmp/bdevperf.sock 00:19:39.491 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 1712297 ']' 00:19:39.491 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:39.491 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:39.491 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:39.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:39.491 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:39.491 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.491 [2024-11-19 10:46:46.849898] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:19:39.491 [2024-11-19 10:46:46.849957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1712297 ] 00:19:39.491 [2024-11-19 10:46:46.918822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.749 [2024-11-19 10:46:46.960136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:39.749 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:39.749 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:39.749 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NoflwE3gor 00:19:40.007 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:40.007 [2024-11-19 10:46:47.407185] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:40.264 TLSTESTn1 00:19:40.265 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:40.523 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:40.523 "subsystems": [ 00:19:40.523 { 00:19:40.523 "subsystem": "keyring", 00:19:40.523 "config": [ 00:19:40.523 { 00:19:40.523 "method": "keyring_file_add_key", 00:19:40.523 "params": { 00:19:40.523 "name": "key0", 00:19:40.523 "path": "/tmp/tmp.NoflwE3gor" 00:19:40.523 } 00:19:40.523 } 00:19:40.523 ] 00:19:40.523 }, 00:19:40.523 { 00:19:40.523 "subsystem": "iobuf", 00:19:40.523 "config": [ 00:19:40.523 { 00:19:40.523 "method": "iobuf_set_options", 00:19:40.523 "params": { 00:19:40.523 "small_pool_count": 8192, 00:19:40.523 "large_pool_count": 1024, 00:19:40.523 "small_bufsize": 8192, 00:19:40.523 "large_bufsize": 135168, 00:19:40.523 "enable_numa": false 00:19:40.523 } 00:19:40.523 } 00:19:40.523 ] 00:19:40.523 }, 00:19:40.523 { 00:19:40.523 "subsystem": "sock", 00:19:40.523 "config": [ 00:19:40.523 { 00:19:40.523 "method": "sock_set_default_impl", 00:19:40.523 "params": { 00:19:40.523 "impl_name": "posix" 
00:19:40.523 } 00:19:40.523 }, 00:19:40.523 { 00:19:40.523 "method": "sock_impl_set_options", 00:19:40.523 "params": { 00:19:40.523 "impl_name": "ssl", 00:19:40.523 "recv_buf_size": 4096, 00:19:40.523 "send_buf_size": 4096, 00:19:40.523 "enable_recv_pipe": true, 00:19:40.523 "enable_quickack": false, 00:19:40.523 "enable_placement_id": 0, 00:19:40.523 "enable_zerocopy_send_server": true, 00:19:40.524 "enable_zerocopy_send_client": false, 00:19:40.524 "zerocopy_threshold": 0, 00:19:40.524 "tls_version": 0, 00:19:40.524 "enable_ktls": false 00:19:40.524 } 00:19:40.524 }, 00:19:40.524 { 00:19:40.524 "method": "sock_impl_set_options", 00:19:40.524 "params": { 00:19:40.524 "impl_name": "posix", 00:19:40.524 "recv_buf_size": 2097152, 00:19:40.524 "send_buf_size": 2097152, 00:19:40.524 "enable_recv_pipe": true, 00:19:40.524 "enable_quickack": false, 00:19:40.524 "enable_placement_id": 0, 00:19:40.524 "enable_zerocopy_send_server": true, 00:19:40.524 "enable_zerocopy_send_client": false, 00:19:40.524 "zerocopy_threshold": 0, 00:19:40.524 "tls_version": 0, 00:19:40.524 "enable_ktls": false 00:19:40.524 } 00:19:40.524 } 00:19:40.524 ] 00:19:40.524 }, 00:19:40.524 { 00:19:40.524 "subsystem": "vmd", 00:19:40.524 "config": [] 00:19:40.524 }, 00:19:40.524 { 00:19:40.524 "subsystem": "accel", 00:19:40.524 "config": [ 00:19:40.524 { 00:19:40.524 "method": "accel_set_options", 00:19:40.524 "params": { 00:19:40.524 "small_cache_size": 128, 00:19:40.524 "large_cache_size": 16, 00:19:40.524 "task_count": 2048, 00:19:40.524 "sequence_count": 2048, 00:19:40.524 "buf_count": 2048 00:19:40.524 } 00:19:40.524 } 00:19:40.524 ] 00:19:40.524 }, 00:19:40.524 { 00:19:40.524 "subsystem": "bdev", 00:19:40.524 "config": [ 00:19:40.524 { 00:19:40.524 "method": "bdev_set_options", 00:19:40.524 "params": { 00:19:40.524 "bdev_io_pool_size": 65535, 00:19:40.524 "bdev_io_cache_size": 256, 00:19:40.524 "bdev_auto_examine": true, 00:19:40.524 "iobuf_small_cache_size": 128, 00:19:40.524 "iobuf_large_cache_size": 16 00:19:40.524 } 00:19:40.524 }, 00:19:40.524 { 00:19:40.524 "method": "bdev_raid_set_options", 00:19:40.524 "params": { 00:19:40.524 "process_window_size_kb": 1024, 00:19:40.524 "process_max_bandwidth_mb_sec": 0 00:19:40.524 } 00:19:40.524 }, 00:19:40.524 { 00:19:40.524 "method": "bdev_iscsi_set_options", 00:19:40.524 "params": { 00:19:40.524 "timeout_sec": 30 00:19:40.524 } 00:19:40.524 }, 00:19:40.524 { 00:19:40.524 "method": "bdev_nvme_set_options", 00:19:40.524 "params": { 00:19:40.524 "action_on_timeout": "none", 00:19:40.524 "timeout_us": 0, 00:19:40.524 "timeout_admin_us": 0, 00:19:40.524 "keep_alive_timeout_ms": 10000, 00:19:40.524 "arbitration_burst": 0, 00:19:40.524 "low_priority_weight": 0, 00:19:40.524 "medium_priority_weight": 0, 00:19:40.524 "high_priority_weight": 0, 00:19:40.524 "nvme_adminq_poll_period_us": 10000, 00:19:40.524 "nvme_ioq_poll_period_us": 0, 00:19:40.524 "io_queue_requests": 0, 00:19:40.524 "delay_cmd_submit": true, 00:19:40.524 "transport_retry_count": 4, 00:19:40.524 "bdev_retry_count": 3, 00:19:40.524 "transport_ack_timeout": 0, 00:19:40.524 "ctrlr_loss_timeout_sec": 0, 00:19:40.524 "reconnect_delay_sec": 0, 00:19:40.524 "fast_io_fail_timeout_sec": 0, 00:19:40.524 "disable_auto_failback": false, 00:19:40.524 "generate_uuids": false, 00:19:40.524 "transport_tos": 0, 00:19:40.524 "nvme_error_stat": false, 00:19:40.524 "rdma_srq_size": 0, 00:19:40.524 "io_path_stat": false, 00:19:40.524 "allow_accel_sequence": false, 00:19:40.524 "rdma_max_cq_size": 0, 00:19:40.524 
"rdma_cm_event_timeout_ms": 0, 00:19:40.524 "dhchap_digests": [ 00:19:40.524 "sha256", 00:19:40.524 "sha384", 00:19:40.524 "sha512" 00:19:40.524 ], 00:19:40.524 "dhchap_dhgroups": [ 00:19:40.524 "null", 00:19:40.524 "ffdhe2048", 00:19:40.524 "ffdhe3072", 00:19:40.524 "ffdhe4096", 00:19:40.524 "ffdhe6144", 00:19:40.524 "ffdhe8192" 00:19:40.524 ] 00:19:40.524 } 00:19:40.524 }, 00:19:40.524 { 00:19:40.524 "method": "bdev_nvme_set_hotplug", 00:19:40.524 "params": { 00:19:40.524 "period_us": 100000, 00:19:40.524 "enable": false 00:19:40.524 } 00:19:40.524 }, 00:19:40.524 { 00:19:40.524 "method": "bdev_malloc_create", 00:19:40.524 "params": { 00:19:40.524 "name": "malloc0", 00:19:40.524 "num_blocks": 8192, 00:19:40.524 "block_size": 4096, 00:19:40.524 "physical_block_size": 4096, 00:19:40.524 "uuid": "1b413939-6581-4af0-a16f-a56c3b79475c", 00:19:40.524 "optimal_io_boundary": 0, 00:19:40.524 "md_size": 0, 00:19:40.524 "dif_type": 0, 00:19:40.524 "dif_is_head_of_md": false, 00:19:40.524 "dif_pi_format": 0 00:19:40.524 } 00:19:40.524 }, 00:19:40.524 { 00:19:40.524 "method": "bdev_wait_for_examine" 00:19:40.524 } 00:19:40.524 ] 00:19:40.524 }, 00:19:40.524 { 00:19:40.524 "subsystem": "nbd", 00:19:40.524 "config": [] 00:19:40.524 }, 00:19:40.524 { 00:19:40.524 "subsystem": "scheduler", 00:19:40.524 "config": [ 00:19:40.524 { 00:19:40.524 "method": "framework_set_scheduler", 00:19:40.524 "params": { 00:19:40.524 "name": "static" 00:19:40.524 } 00:19:40.524 } 00:19:40.524 ] 00:19:40.524 }, 00:19:40.524 { 00:19:40.524 "subsystem": "nvmf", 00:19:40.524 "config": [ 00:19:40.524 { 00:19:40.524 "method": "nvmf_set_config", 00:19:40.524 "params": { 00:19:40.524 "discovery_filter": "match_any", 00:19:40.524 "admin_cmd_passthru": { 00:19:40.524 "identify_ctrlr": false 00:19:40.524 }, 00:19:40.524 "dhchap_digests": [ 00:19:40.524 "sha256", 00:19:40.524 "sha384", 00:19:40.524 "sha512" 00:19:40.524 ], 00:19:40.524 "dhchap_dhgroups": [ 00:19:40.524 "null", 00:19:40.524 "ffdhe2048", 00:19:40.524 "ffdhe3072", 00:19:40.524 "ffdhe4096", 00:19:40.524 "ffdhe6144", 00:19:40.524 "ffdhe8192" 00:19:40.524 ] 00:19:40.524 } 00:19:40.524 }, 00:19:40.524 { 00:19:40.524 "method": "nvmf_set_max_subsystems", 00:19:40.524 "params": { 00:19:40.524 "max_subsystems": 1024 00:19:40.524 } 00:19:40.524 }, 00:19:40.524 { 00:19:40.524 "method": "nvmf_set_crdt", 00:19:40.524 "params": { 00:19:40.524 "crdt1": 0, 00:19:40.524 "crdt2": 0, 00:19:40.524 "crdt3": 0 00:19:40.524 } 00:19:40.524 }, 00:19:40.524 { 00:19:40.524 "method": "nvmf_create_transport", 00:19:40.524 "params": { 00:19:40.524 "trtype": "TCP", 00:19:40.524 "max_queue_depth": 128, 00:19:40.524 "max_io_qpairs_per_ctrlr": 127, 00:19:40.524 "in_capsule_data_size": 4096, 00:19:40.524 "max_io_size": 131072, 00:19:40.524 "io_unit_size": 131072, 00:19:40.524 "max_aq_depth": 128, 00:19:40.524 "num_shared_buffers": 511, 00:19:40.524 "buf_cache_size": 4294967295, 00:19:40.524 "dif_insert_or_strip": false, 00:19:40.524 "zcopy": false, 00:19:40.524 "c2h_success": false, 00:19:40.524 "sock_priority": 0, 00:19:40.524 "abort_timeout_sec": 1, 00:19:40.524 "ack_timeout": 0, 00:19:40.524 "data_wr_pool_size": 0 00:19:40.524 } 00:19:40.524 }, 00:19:40.524 { 00:19:40.524 "method": "nvmf_create_subsystem", 00:19:40.524 "params": { 00:19:40.524 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:40.524 "allow_any_host": false, 00:19:40.525 "serial_number": "SPDK00000000000001", 00:19:40.525 "model_number": "SPDK bdev Controller", 00:19:40.525 "max_namespaces": 10, 00:19:40.525 "min_cntlid": 1, 00:19:40.525 
"max_cntlid": 65519, 00:19:40.525 "ana_reporting": false 00:19:40.525 } 00:19:40.525 }, 00:19:40.525 { 00:19:40.525 "method": "nvmf_subsystem_add_host", 00:19:40.525 "params": { 00:19:40.525 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:40.525 "host": "nqn.2016-06.io.spdk:host1", 00:19:40.525 "psk": "key0" 00:19:40.525 } 00:19:40.525 }, 00:19:40.525 { 00:19:40.525 "method": "nvmf_subsystem_add_ns", 00:19:40.525 "params": { 00:19:40.525 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:40.525 "namespace": { 00:19:40.525 "nsid": 1, 00:19:40.525 "bdev_name": "malloc0", 00:19:40.525 "nguid": "1B41393965814AF0A16FA56C3B79475C", 00:19:40.525 "uuid": "1b413939-6581-4af0-a16f-a56c3b79475c", 00:19:40.525 "no_auto_visible": false 00:19:40.525 } 00:19:40.525 } 00:19:40.525 }, 00:19:40.525 { 00:19:40.525 "method": "nvmf_subsystem_add_listener", 00:19:40.525 "params": { 00:19:40.525 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:40.525 "listen_address": { 00:19:40.525 "trtype": "TCP", 00:19:40.525 "adrfam": "IPv4", 00:19:40.525 "traddr": "10.0.0.2", 00:19:40.525 "trsvcid": "4420" 00:19:40.525 }, 00:19:40.525 "secure_channel": true 00:19:40.525 } 00:19:40.525 } 00:19:40.525 ] 00:19:40.525 } 00:19:40.525 ] 00:19:40.525 }' 00:19:40.525 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:40.783 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:40.783 "subsystems": [ 00:19:40.783 { 00:19:40.783 "subsystem": "keyring", 00:19:40.783 "config": [ 00:19:40.783 { 00:19:40.783 "method": "keyring_file_add_key", 00:19:40.783 "params": { 00:19:40.783 "name": "key0", 00:19:40.783 "path": "/tmp/tmp.NoflwE3gor" 00:19:40.783 } 00:19:40.783 } 00:19:40.783 ] 00:19:40.783 }, 00:19:40.783 { 00:19:40.783 "subsystem": "iobuf", 00:19:40.783 "config": [ 00:19:40.783 { 00:19:40.783 "method": "iobuf_set_options", 00:19:40.783 "params": { 00:19:40.783 "small_pool_count": 8192, 00:19:40.783 "large_pool_count": 1024, 00:19:40.783 "small_bufsize": 8192, 00:19:40.783 "large_bufsize": 135168, 00:19:40.783 "enable_numa": false 00:19:40.783 } 00:19:40.783 } 00:19:40.783 ] 00:19:40.783 }, 00:19:40.783 { 00:19:40.783 "subsystem": "sock", 00:19:40.783 "config": [ 00:19:40.783 { 00:19:40.783 "method": "sock_set_default_impl", 00:19:40.783 "params": { 00:19:40.783 "impl_name": "posix" 00:19:40.783 } 00:19:40.783 }, 00:19:40.783 { 00:19:40.783 "method": "sock_impl_set_options", 00:19:40.783 "params": { 00:19:40.783 "impl_name": "ssl", 00:19:40.783 "recv_buf_size": 4096, 00:19:40.783 "send_buf_size": 4096, 00:19:40.783 "enable_recv_pipe": true, 00:19:40.783 "enable_quickack": false, 00:19:40.783 "enable_placement_id": 0, 00:19:40.783 "enable_zerocopy_send_server": true, 00:19:40.783 "enable_zerocopy_send_client": false, 00:19:40.783 "zerocopy_threshold": 0, 00:19:40.783 "tls_version": 0, 00:19:40.783 "enable_ktls": false 00:19:40.784 } 00:19:40.784 }, 00:19:40.784 { 00:19:40.784 "method": "sock_impl_set_options", 00:19:40.784 "params": { 00:19:40.784 "impl_name": "posix", 00:19:40.784 "recv_buf_size": 2097152, 00:19:40.784 "send_buf_size": 2097152, 00:19:40.784 "enable_recv_pipe": true, 00:19:40.784 "enable_quickack": false, 00:19:40.784 "enable_placement_id": 0, 00:19:40.784 "enable_zerocopy_send_server": true, 00:19:40.784 "enable_zerocopy_send_client": false, 00:19:40.784 "zerocopy_threshold": 0, 00:19:40.784 "tls_version": 0, 00:19:40.784 "enable_ktls": false 00:19:40.784 } 00:19:40.784 
} 00:19:40.784 ] 00:19:40.784 }, 00:19:40.784 { 00:19:40.784 "subsystem": "vmd", 00:19:40.784 "config": [] 00:19:40.784 }, 00:19:40.784 { 00:19:40.784 "subsystem": "accel", 00:19:40.784 "config": [ 00:19:40.784 { 00:19:40.784 "method": "accel_set_options", 00:19:40.784 "params": { 00:19:40.784 "small_cache_size": 128, 00:19:40.784 "large_cache_size": 16, 00:19:40.784 "task_count": 2048, 00:19:40.784 "sequence_count": 2048, 00:19:40.784 "buf_count": 2048 00:19:40.784 } 00:19:40.784 } 00:19:40.784 ] 00:19:40.784 }, 00:19:40.784 { 00:19:40.784 "subsystem": "bdev", 00:19:40.784 "config": [ 00:19:40.784 { 00:19:40.784 "method": "bdev_set_options", 00:19:40.784 "params": { 00:19:40.784 "bdev_io_pool_size": 65535, 00:19:40.784 "bdev_io_cache_size": 256, 00:19:40.784 "bdev_auto_examine": true, 00:19:40.784 "iobuf_small_cache_size": 128, 00:19:40.784 "iobuf_large_cache_size": 16 00:19:40.784 } 00:19:40.784 }, 00:19:40.784 { 00:19:40.784 "method": "bdev_raid_set_options", 00:19:40.784 "params": { 00:19:40.784 "process_window_size_kb": 1024, 00:19:40.784 "process_max_bandwidth_mb_sec": 0 00:19:40.784 } 00:19:40.784 }, 00:19:40.784 { 00:19:40.784 "method": "bdev_iscsi_set_options", 00:19:40.784 "params": { 00:19:40.784 "timeout_sec": 30 00:19:40.784 } 00:19:40.784 }, 00:19:40.784 { 00:19:40.784 "method": "bdev_nvme_set_options", 00:19:40.784 "params": { 00:19:40.784 "action_on_timeout": "none", 00:19:40.784 "timeout_us": 0, 00:19:40.784 "timeout_admin_us": 0, 00:19:40.784 "keep_alive_timeout_ms": 10000, 00:19:40.784 "arbitration_burst": 0, 00:19:40.784 "low_priority_weight": 0, 00:19:40.784 "medium_priority_weight": 0, 00:19:40.784 "high_priority_weight": 0, 00:19:40.784 "nvme_adminq_poll_period_us": 10000, 00:19:40.784 "nvme_ioq_poll_period_us": 0, 00:19:40.784 "io_queue_requests": 512, 00:19:40.784 "delay_cmd_submit": true, 00:19:40.784 "transport_retry_count": 4, 00:19:40.784 "bdev_retry_count": 3, 00:19:40.784 "transport_ack_timeout": 0, 00:19:40.784 "ctrlr_loss_timeout_sec": 0, 00:19:40.784 "reconnect_delay_sec": 0, 00:19:40.784 "fast_io_fail_timeout_sec": 0, 00:19:40.784 "disable_auto_failback": false, 00:19:40.784 "generate_uuids": false, 00:19:40.784 "transport_tos": 0, 00:19:40.784 "nvme_error_stat": false, 00:19:40.784 "rdma_srq_size": 0, 00:19:40.784 "io_path_stat": false, 00:19:40.784 "allow_accel_sequence": false, 00:19:40.784 "rdma_max_cq_size": 0, 00:19:40.784 "rdma_cm_event_timeout_ms": 0, 00:19:40.784 "dhchap_digests": [ 00:19:40.784 "sha256", 00:19:40.784 "sha384", 00:19:40.784 "sha512" 00:19:40.784 ], 00:19:40.784 "dhchap_dhgroups": [ 00:19:40.784 "null", 00:19:40.784 "ffdhe2048", 00:19:40.784 "ffdhe3072", 00:19:40.784 "ffdhe4096", 00:19:40.784 "ffdhe6144", 00:19:40.784 "ffdhe8192" 00:19:40.784 ] 00:19:40.784 } 00:19:40.784 }, 00:19:40.784 { 00:19:40.784 "method": "bdev_nvme_attach_controller", 00:19:40.784 "params": { 00:19:40.784 "name": "TLSTEST", 00:19:40.784 "trtype": "TCP", 00:19:40.784 "adrfam": "IPv4", 00:19:40.784 "traddr": "10.0.0.2", 00:19:40.784 "trsvcid": "4420", 00:19:40.784 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:40.784 "prchk_reftag": false, 00:19:40.784 "prchk_guard": false, 00:19:40.784 "ctrlr_loss_timeout_sec": 0, 00:19:40.784 "reconnect_delay_sec": 0, 00:19:40.784 "fast_io_fail_timeout_sec": 0, 00:19:40.784 "psk": "key0", 00:19:40.784 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:40.784 "hdgst": false, 00:19:40.784 "ddgst": false, 00:19:40.784 "multipath": "multipath" 00:19:40.784 } 00:19:40.784 }, 00:19:40.784 { 00:19:40.784 "method": 
"bdev_nvme_set_hotplug", 00:19:40.784 "params": { 00:19:40.784 "period_us": 100000, 00:19:40.784 "enable": false 00:19:40.784 } 00:19:40.784 }, 00:19:40.784 { 00:19:40.784 "method": "bdev_wait_for_examine" 00:19:40.784 } 00:19:40.784 ] 00:19:40.784 }, 00:19:40.784 { 00:19:40.784 "subsystem": "nbd", 00:19:40.784 "config": [] 00:19:40.784 } 00:19:40.784 ] 00:19:40.784 }' 00:19:40.784 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1712297 00:19:40.784 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1712297 ']' 00:19:40.784 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1712297 00:19:40.784 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:40.784 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:40.784 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1712297 00:19:40.784 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:40.784 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:40.784 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1712297' 00:19:40.784 killing process with pid 1712297 00:19:40.784 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1712297 00:19:40.784 Received shutdown signal, test time was about 10.000000 seconds 00:19:40.784 00:19:40.784 Latency(us) 00:19:40.784 [2024-11-19T09:46:48.233Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.784 [2024-11-19T09:46:48.233Z] =================================================================================================================== 00:19:40.784 [2024-11-19T09:46:48.233Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:40.784 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1712297 00:19:41.043 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1711899 00:19:41.043 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1711899 ']' 00:19:41.043 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1711899 00:19:41.043 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:41.043 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:41.043 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1711899 00:19:41.043 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:41.043 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:41.043 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1711899' 00:19:41.043 killing process with pid 1711899 00:19:41.043 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1711899 00:19:41.043 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1711899 00:19:41.043 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:41.043 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:41.043 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:41.043 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:41.043 "subsystems": [ 00:19:41.043 { 00:19:41.043 "subsystem": "keyring", 00:19:41.043 "config": [ 00:19:41.043 { 00:19:41.043 "method": "keyring_file_add_key", 00:19:41.043 "params": { 00:19:41.043 "name": "key0", 00:19:41.043 "path": "/tmp/tmp.NoflwE3gor" 00:19:41.043 } 00:19:41.043 } 00:19:41.043 ] 00:19:41.043 }, 00:19:41.043 { 00:19:41.043 "subsystem": "iobuf", 00:19:41.043 "config": [ 00:19:41.043 { 00:19:41.043 "method": "iobuf_set_options", 00:19:41.043 "params": { 00:19:41.043 "small_pool_count": 8192, 00:19:41.043 "large_pool_count": 1024, 00:19:41.043 "small_bufsize": 8192, 00:19:41.043 "large_bufsize": 135168, 00:19:41.043 "enable_numa": false 00:19:41.043 } 00:19:41.043 } 00:19:41.043 ] 00:19:41.043 }, 00:19:41.043 { 00:19:41.043 "subsystem": "sock", 00:19:41.043 "config": [ 00:19:41.043 { 00:19:41.043 "method": "sock_set_default_impl", 00:19:41.043 "params": { 00:19:41.043 "impl_name": "posix" 00:19:41.043 } 00:19:41.043 }, 00:19:41.043 { 00:19:41.043 "method": "sock_impl_set_options", 00:19:41.043 "params": { 00:19:41.043 "impl_name": "ssl", 00:19:41.043 "recv_buf_size": 4096, 00:19:41.043 "send_buf_size": 4096, 00:19:41.043 "enable_recv_pipe": true, 00:19:41.043 "enable_quickack": false, 00:19:41.043 "enable_placement_id": 0, 00:19:41.043 "enable_zerocopy_send_server": true, 00:19:41.043 "enable_zerocopy_send_client": false, 00:19:41.043 "zerocopy_threshold": 0, 00:19:41.043 "tls_version": 0, 00:19:41.043 "enable_ktls": false 00:19:41.043 } 00:19:41.043 }, 00:19:41.043 { 00:19:41.043 "method": "sock_impl_set_options", 00:19:41.043 "params": { 00:19:41.043 "impl_name": "posix", 00:19:41.043 "recv_buf_size": 2097152, 00:19:41.043 "send_buf_size": 2097152, 00:19:41.043 "enable_recv_pipe": true, 00:19:41.043 "enable_quickack": false, 00:19:41.043 "enable_placement_id": 0, 00:19:41.043 "enable_zerocopy_send_server": true, 00:19:41.043 "enable_zerocopy_send_client": false, 00:19:41.043 "zerocopy_threshold": 0, 00:19:41.043 "tls_version": 0, 00:19:41.043 "enable_ktls": false 00:19:41.043 } 00:19:41.043 } 00:19:41.043 ] 00:19:41.043 }, 00:19:41.043 { 00:19:41.043 "subsystem": "vmd", 00:19:41.043 "config": [] 00:19:41.043 }, 00:19:41.043 { 00:19:41.043 "subsystem": "accel", 00:19:41.043 "config": [ 00:19:41.043 { 00:19:41.043 "method": "accel_set_options", 00:19:41.043 "params": { 00:19:41.043 "small_cache_size": 128, 00:19:41.043 "large_cache_size": 16, 00:19:41.043 "task_count": 2048, 00:19:41.043 "sequence_count": 2048, 00:19:41.043 "buf_count": 2048 00:19:41.043 } 00:19:41.043 } 00:19:41.043 ] 00:19:41.043 }, 00:19:41.043 { 00:19:41.043 "subsystem": "bdev", 00:19:41.043 "config": [ 00:19:41.043 { 00:19:41.043 "method": "bdev_set_options", 00:19:41.043 "params": { 00:19:41.043 "bdev_io_pool_size": 65535, 00:19:41.043 "bdev_io_cache_size": 256, 00:19:41.043 "bdev_auto_examine": true, 00:19:41.043 "iobuf_small_cache_size": 128, 00:19:41.043 "iobuf_large_cache_size": 16 00:19:41.043 } 00:19:41.043 }, 00:19:41.043 { 00:19:41.043 "method": "bdev_raid_set_options", 00:19:41.043 "params": { 00:19:41.043 "process_window_size_kb": 1024, 00:19:41.043 "process_max_bandwidth_mb_sec": 0 00:19:41.043 } 00:19:41.043 }, 
00:19:41.043 { 00:19:41.043 "method": "bdev_iscsi_set_options", 00:19:41.043 "params": { 00:19:41.043 "timeout_sec": 30 00:19:41.043 } 00:19:41.043 }, 00:19:41.043 { 00:19:41.043 "method": "bdev_nvme_set_options", 00:19:41.043 "params": { 00:19:41.043 "action_on_timeout": "none", 00:19:41.043 "timeout_us": 0, 00:19:41.043 "timeout_admin_us": 0, 00:19:41.043 "keep_alive_timeout_ms": 10000, 00:19:41.043 "arbitration_burst": 0, 00:19:41.043 "low_priority_weight": 0, 00:19:41.043 "medium_priority_weight": 0, 00:19:41.043 "high_priority_weight": 0, 00:19:41.043 "nvme_adminq_poll_period_us": 10000, 00:19:41.043 "nvme_ioq_poll_period_us": 0, 00:19:41.043 "io_queue_requests": 0, 00:19:41.044 "delay_cmd_submit": true, 00:19:41.044 "transport_retry_count": 4, 00:19:41.044 "bdev_retry_count": 3, 00:19:41.044 "transport_ack_timeout": 0, 00:19:41.044 "ctrlr_loss_timeout_sec": 0, 00:19:41.044 "reconnect_delay_sec": 0, 00:19:41.044 "fast_io_fail_timeout_sec": 0, 00:19:41.044 "disable_auto_failback": false, 00:19:41.044 "generate_uuids": false, 00:19:41.044 "transport_tos": 0, 00:19:41.044 "nvme_error_stat": false, 00:19:41.044 "rdma_srq_size": 0, 00:19:41.044 "io_path_stat": false, 00:19:41.044 "allow_accel_sequence": false, 00:19:41.044 "rdma_max_cq_size": 0, 00:19:41.044 "rdma_cm_event_timeout_ms": 0, 00:19:41.044 "dhchap_digests": [ 00:19:41.044 "sha256", 00:19:41.044 "sha384", 00:19:41.044 "sha512" 00:19:41.044 ], 00:19:41.044 "dhchap_dhgroups": [ 00:19:41.044 "null", 00:19:41.044 "ffdhe2048", 00:19:41.044 "ffdhe3072", 00:19:41.044 "ffdhe4096", 00:19:41.044 "ffdhe6144", 00:19:41.044 "ffdhe8192" 00:19:41.044 ] 00:19:41.044 } 00:19:41.044 }, 00:19:41.044 { 00:19:41.044 "method": "bdev_nvme_set_hotplug", 00:19:41.044 "params": { 00:19:41.044 "period_us": 100000, 00:19:41.044 "enable": false 00:19:41.044 } 00:19:41.044 }, 00:19:41.044 { 00:19:41.044 "method": "bdev_malloc_create", 00:19:41.044 "params": { 00:19:41.044 "name": "malloc0", 00:19:41.044 "num_blocks": 8192, 00:19:41.044 "block_size": 4096, 00:19:41.044 "physical_block_size": 4096, 00:19:41.044 "uuid": "1b413939-6581-4af0-a16f-a56c3b79475c", 00:19:41.044 "optimal_io_boundary": 0, 00:19:41.044 "md_size": 0, 00:19:41.044 "dif_type": 0, 00:19:41.044 "dif_is_head_of_md": false, 00:19:41.044 "dif_pi_format": 0 00:19:41.044 } 00:19:41.044 }, 00:19:41.044 { 00:19:41.044 "method": "bdev_wait_for_examine" 00:19:41.044 } 00:19:41.044 ] 00:19:41.044 }, 00:19:41.044 { 00:19:41.044 "subsystem": "nbd", 00:19:41.044 "config": [] 00:19:41.044 }, 00:19:41.044 { 00:19:41.044 "subsystem": "scheduler", 00:19:41.044 "config": [ 00:19:41.044 { 00:19:41.044 "method": "framework_set_scheduler", 00:19:41.044 "params": { 00:19:41.044 "name": "static" 00:19:41.044 } 00:19:41.044 } 00:19:41.044 ] 00:19:41.044 }, 00:19:41.044 { 00:19:41.044 "subsystem": "nvmf", 00:19:41.044 "config": [ 00:19:41.044 { 00:19:41.044 "method": "nvmf_set_config", 00:19:41.044 "params": { 00:19:41.044 "discovery_filter": "match_any", 00:19:41.044 "admin_cmd_passthru": { 00:19:41.044 "identify_ctrlr": false 00:19:41.044 }, 00:19:41.044 "dhchap_digests": [ 00:19:41.044 "sha256", 00:19:41.044 "sha384", 00:19:41.044 "sha512" 00:19:41.044 ], 00:19:41.044 "dhchap_dhgroups": [ 00:19:41.044 "null", 00:19:41.044 "ffdhe2048", 00:19:41.044 "ffdhe3072", 00:19:41.044 "ffdhe4096", 00:19:41.044 "ffdhe6144", 00:19:41.044 "ffdhe8192" 00:19:41.044 ] 00:19:41.044 } 00:19:41.044 }, 00:19:41.044 { 00:19:41.044 "method": "nvmf_set_max_subsystems", 00:19:41.044 "params": { 00:19:41.044 "max_subsystems": 1024 
00:19:41.044 } 00:19:41.044 }, 00:19:41.044 { 00:19:41.044 "method": "nvmf_set_crdt", 00:19:41.044 "params": { 00:19:41.044 "crdt1": 0, 00:19:41.044 "crdt2": 0, 00:19:41.044 "crdt3": 0 00:19:41.044 } 00:19:41.044 }, 00:19:41.044 { 00:19:41.044 "method": "nvmf_create_transport", 00:19:41.044 "params": { 00:19:41.044 "trtype": "TCP", 00:19:41.044 "max_queue_depth": 128, 00:19:41.044 "max_io_qpairs_per_ctrlr": 127, 00:19:41.044 "in_capsule_data_size": 4096, 00:19:41.044 "max_io_size": 131072, 00:19:41.044 "io_unit_size": 131072, 00:19:41.044 "max_aq_depth": 128, 00:19:41.044 "num_shared_buffers": 511, 00:19:41.044 "buf_cache_size": 4294967295, 00:19:41.044 "dif_insert_or_strip": false, 00:19:41.044 "zcopy": false, 00:19:41.044 "c2h_success": false, 00:19:41.044 "sock_priority": 0, 00:19:41.044 "abort_timeout_sec": 1, 00:19:41.044 "ack_timeout": 0, 00:19:41.044 "data_wr_pool_size": 0 00:19:41.044 } 00:19:41.044 }, 00:19:41.044 { 00:19:41.044 "method": "nvmf_create_subsystem", 00:19:41.044 "params": { 00:19:41.044 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:41.044 "allow_any_host": false, 00:19:41.044 "serial_number": "SPDK00000000000001", 00:19:41.044 "model_number": "SPDK bdev Controller", 00:19:41.044 "max_namespaces": 10, 00:19:41.044 "min_cntlid": 1, 00:19:41.044 "max_cntlid": 65519, 00:19:41.044 "ana_reporting": false 00:19:41.044 } 00:19:41.044 }, 00:19:41.044 { 00:19:41.044 "method": "nvmf_subsystem_add_host", 00:19:41.044 "params": { 00:19:41.044 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:41.044 "host": "nqn.2016-06.io.spdk:host1", 00:19:41.044 "psk": "key0" 00:19:41.044 } 00:19:41.044 }, 00:19:41.044 { 00:19:41.044 "method": "nvmf_subsystem_add_ns", 00:19:41.044 "params": { 00:19:41.044 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:41.044 "namespace": { 00:19:41.044 "nsid": 1, 00:19:41.044 "bdev_name": "malloc0", 00:19:41.044 "nguid": "1B41393965814AF0A16FA56C3B79475C", 00:19:41.044 "uuid": "1b413939-6581-4af0-a16f-a56c3b79475c", 00:19:41.044 "no_auto_visible": false 00:19:41.044 } 00:19:41.044 } 00:19:41.044 }, 00:19:41.044 { 00:19:41.044 "method": "nvmf_subsystem_add_listener", 00:19:41.044 "params": { 00:19:41.044 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:41.044 "listen_address": { 00:19:41.044 "trtype": "TCP", 00:19:41.044 "adrfam": "IPv4", 00:19:41.044 "traddr": "10.0.0.2", 00:19:41.044 "trsvcid": "4420" 00:19:41.044 }, 00:19:41.044 "secure_channel": true 00:19:41.044 } 00:19:41.044 } 00:19:41.044 ] 00:19:41.044 } 00:19:41.044 ] 00:19:41.044 }' 00:19:41.044 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:41.044 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1712551 00:19:41.044 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1712551 00:19:41.044 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:41.044 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1712551 ']' 00:19:41.044 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:41.044 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:41.044 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:19:41.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:41.044 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:41.044 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:41.354 [2024-11-19 10:46:48.534063] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:19:41.354 [2024-11-19 10:46:48.534110] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:41.354 [2024-11-19 10:46:48.613290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.354 [2024-11-19 10:46:48.653749] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:41.354 [2024-11-19 10:46:48.653784] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:41.354 [2024-11-19 10:46:48.653791] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:41.354 [2024-11-19 10:46:48.653797] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:41.354 [2024-11-19 10:46:48.653802] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:41.354 [2024-11-19 10:46:48.654407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:41.771 [2024-11-19 10:46:48.868640] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:41.771 [2024-11-19 10:46:48.900678] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:41.771 [2024-11-19 10:46:48.900864] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:42.029 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:42.029 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:42.029 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:42.029 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:42.029 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.029 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:42.029 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1712705 00:19:42.029 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1712705 /var/tmp/bdevperf.sock 00:19:42.029 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1712705 ']' 00:19:42.029 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:42.029 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:42.029 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:42.029 10:46:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:42.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:42.029 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:19:42.029 "subsystems": [ 00:19:42.029 { 00:19:42.029 "subsystem": "keyring", 00:19:42.029 "config": [ 00:19:42.029 { 00:19:42.029 "method": "keyring_file_add_key", 00:19:42.029 "params": { 00:19:42.029 "name": "key0", 00:19:42.029 "path": "/tmp/tmp.NoflwE3gor" 00:19:42.029 } 00:19:42.029 } 00:19:42.029 ] 00:19:42.029 }, 00:19:42.029 { 00:19:42.029 "subsystem": "iobuf", 00:19:42.029 "config": [ 00:19:42.029 { 00:19:42.029 "method": "iobuf_set_options", 00:19:42.029 "params": { 00:19:42.029 "small_pool_count": 8192, 00:19:42.029 "large_pool_count": 1024, 00:19:42.029 "small_bufsize": 8192, 00:19:42.029 "large_bufsize": 135168, 00:19:42.029 "enable_numa": false 00:19:42.029 } 00:19:42.029 } 00:19:42.029 ] 00:19:42.029 }, 00:19:42.029 { 00:19:42.029 "subsystem": "sock", 00:19:42.029 "config": [ 00:19:42.029 { 00:19:42.029 "method": "sock_set_default_impl", 00:19:42.029 "params": { 00:19:42.029 "impl_name": "posix" 00:19:42.029 } 00:19:42.029 }, 00:19:42.029 { 00:19:42.029 "method": "sock_impl_set_options", 00:19:42.029 "params": { 00:19:42.029 "impl_name": "ssl", 00:19:42.029 "recv_buf_size": 4096, 00:19:42.029 "send_buf_size": 4096, 00:19:42.029 "enable_recv_pipe": true, 00:19:42.029 "enable_quickack": false, 00:19:42.029 "enable_placement_id": 0, 00:19:42.029 "enable_zerocopy_send_server": true, 00:19:42.029 "enable_zerocopy_send_client": false, 00:19:42.029 "zerocopy_threshold": 0, 00:19:42.029 "tls_version": 0, 00:19:42.029 "enable_ktls": false 00:19:42.029 } 00:19:42.029 }, 00:19:42.029 { 00:19:42.029 "method": "sock_impl_set_options", 00:19:42.029 "params": { 00:19:42.029 "impl_name": "posix", 00:19:42.029 "recv_buf_size": 2097152, 00:19:42.029 "send_buf_size": 2097152, 00:19:42.029 "enable_recv_pipe": true, 00:19:42.029 "enable_quickack": false, 00:19:42.029 "enable_placement_id": 0, 00:19:42.029 "enable_zerocopy_send_server": true, 00:19:42.029 "enable_zerocopy_send_client": false, 00:19:42.029 "zerocopy_threshold": 0, 00:19:42.029 "tls_version": 0, 00:19:42.029 "enable_ktls": false 00:19:42.029 } 00:19:42.029 } 00:19:42.029 ] 00:19:42.029 }, 00:19:42.029 { 00:19:42.029 "subsystem": "vmd", 00:19:42.029 "config": [] 00:19:42.029 }, 00:19:42.029 { 00:19:42.029 "subsystem": "accel", 00:19:42.029 "config": [ 00:19:42.029 { 00:19:42.029 "method": "accel_set_options", 00:19:42.029 "params": { 00:19:42.029 "small_cache_size": 128, 00:19:42.029 "large_cache_size": 16, 00:19:42.029 "task_count": 2048, 00:19:42.029 "sequence_count": 2048, 00:19:42.029 "buf_count": 2048 00:19:42.029 } 00:19:42.029 } 00:19:42.029 ] 00:19:42.029 }, 00:19:42.029 { 00:19:42.029 "subsystem": "bdev", 00:19:42.029 "config": [ 00:19:42.029 { 00:19:42.030 "method": "bdev_set_options", 00:19:42.030 "params": { 00:19:42.030 "bdev_io_pool_size": 65535, 00:19:42.030 "bdev_io_cache_size": 256, 00:19:42.030 "bdev_auto_examine": true, 00:19:42.030 "iobuf_small_cache_size": 128, 00:19:42.030 "iobuf_large_cache_size": 16 00:19:42.030 } 00:19:42.030 }, 00:19:42.030 { 00:19:42.030 "method": "bdev_raid_set_options", 00:19:42.030 "params": { 00:19:42.030 "process_window_size_kb": 1024, 00:19:42.030 "process_max_bandwidth_mb_sec": 0 00:19:42.030 } 00:19:42.030 }, 
00:19:42.030 { 00:19:42.030 "method": "bdev_iscsi_set_options", 00:19:42.030 "params": { 00:19:42.030 "timeout_sec": 30 00:19:42.030 } 00:19:42.030 }, 00:19:42.030 { 00:19:42.030 "method": "bdev_nvme_set_options", 00:19:42.030 "params": { 00:19:42.030 "action_on_timeout": "none", 00:19:42.030 "timeout_us": 0, 00:19:42.030 "timeout_admin_us": 0, 00:19:42.030 "keep_alive_timeout_ms": 10000, 00:19:42.030 "arbitration_burst": 0, 00:19:42.030 "low_priority_weight": 0, 00:19:42.030 "medium_priority_weight": 0, 00:19:42.030 "high_priority_weight": 0, 00:19:42.030 "nvme_adminq_poll_period_us": 10000, 00:19:42.030 "nvme_ioq_poll_period_us": 0, 00:19:42.030 "io_queue_requests": 512, 00:19:42.030 "delay_cmd_submit": true, 00:19:42.030 "transport_retry_count": 4, 00:19:42.030 "bdev_retry_count": 3, 00:19:42.030 "transport_ack_timeout": 0, 00:19:42.030 "ctrlr_loss_timeout_sec": 0, 00:19:42.030 "reconnect_delay_sec": 0, 00:19:42.030 "fast_io_fail_timeout_sec": 0, 00:19:42.030 "disable_auto_failback": false, 00:19:42.030 "generate_uuids": false, 00:19:42.030 "transport_tos": 0, 00:19:42.030 "nvme_error_stat": false, 00:19:42.030 "rdma_srq_size": 0, 00:19:42.030 "io_path_stat": false, 00:19:42.030 "allow_accel_sequence": false, 00:19:42.030 "rdma_max_cq_size": 0, 00:19:42.030 "rdma_cm_event_timeout_ms": 0, 00:19:42.030 "dhchap_digests": [ 00:19:42.030 "sha256", 00:19:42.030 "sha384", 00:19:42.030 "sha512" 00:19:42.030 ], 00:19:42.030 "dhchap_dhgroups": [ 00:19:42.030 "null", 00:19:42.030 "ffdhe2048", 00:19:42.030 "ffdhe3072", 00:19:42.030 "ffdhe4096", 00:19:42.030 "ffdhe6144", 00:19:42.030 "ffdhe8192" 00:19:42.030 ] 00:19:42.030 } 00:19:42.030 }, 00:19:42.030 { 00:19:42.030 "method": "bdev_nvme_attach_controller", 00:19:42.030 "params": { 00:19:42.030 "name": "TLSTEST", 00:19:42.030 "trtype": "TCP", 00:19:42.030 "adrfam": "IPv4", 00:19:42.030 "traddr": "10.0.0.2", 00:19:42.030 "trsvcid": "4420", 00:19:42.030 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:42.030 "prchk_reftag": false, 00:19:42.030 "prchk_guard": false, 00:19:42.030 "ctrlr_loss_timeout_sec": 0, 00:19:42.030 "reconnect_delay_sec": 0, 00:19:42.030 "fast_io_fail_timeout_sec": 0, 00:19:42.030 "psk": "key0", 00:19:42.030 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:42.030 "hdgst": false, 00:19:42.030 "ddgst": false, 00:19:42.030 "multipath": "multipath" 00:19:42.030 } 00:19:42.030 }, 00:19:42.030 { 00:19:42.030 "method": "bdev_nvme_set_hotplug", 00:19:42.030 "params": { 00:19:42.030 "period_us": 100000, 00:19:42.030 "enable": false 00:19:42.030 } 00:19:42.030 }, 00:19:42.030 { 00:19:42.030 "method": "bdev_wait_for_examine" 00:19:42.030 } 00:19:42.030 ] 00:19:42.030 }, 00:19:42.030 { 00:19:42.030 "subsystem": "nbd", 00:19:42.030 "config": [] 00:19:42.030 } 00:19:42.030 ] 00:19:42.030 }' 00:19:42.030 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:42.030 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.030 [2024-11-19 10:46:49.449469] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
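The initiator coming up here is the bdevperf instance launched at target/tls.sh@206 above. Restated with its flags annotated, since the raw trace buries them; the command itself is copied from the trace, the long workspace prefix is shortened to build/examples/bdevperf, and the annotations are editorial:

  # -m 0x4                     run the reactor on core 2 (mask 0x4, matching "Reactor started on core 2" below)
  # -z                         start idle and wait for a perform_tests RPC instead of autostarting the workload
  # -r /var/tmp/bdevperf.sock  serve RPCs on this UNIX-domain socket
  # -q 128 -o 4096             queue depth 128, 4096-byte I/Os
  # -w verify -t 10            verify workload for a 10-second run
  # -c /dev/fd/63              the bdevperfconf JSON piped in from the shell
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63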
00:19:42.030 [2024-11-19 10:46:49.449517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1712705 ] 00:19:42.288 [2024-11-19 10:46:49.507717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.288 [2024-11-19 10:46:49.550559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:42.288 [2024-11-19 10:46:49.703209] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:42.854 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:42.854 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:42.854 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:43.112 Running I/O for 10 seconds... 00:19:44.983 5348.00 IOPS, 20.89 MiB/s [2024-11-19T09:46:53.805Z] 5384.00 IOPS, 21.03 MiB/s [2024-11-19T09:46:54.741Z] 5406.00 IOPS, 21.12 MiB/s [2024-11-19T09:46:55.676Z] 5374.25 IOPS, 20.99 MiB/s [2024-11-19T09:46:56.610Z] 5374.20 IOPS, 20.99 MiB/s [2024-11-19T09:46:57.545Z] 5387.67 IOPS, 21.05 MiB/s [2024-11-19T09:46:58.489Z] 5388.29 IOPS, 21.05 MiB/s [2024-11-19T09:46:59.424Z] 5403.50 IOPS, 21.11 MiB/s [2024-11-19T09:47:00.801Z] 5396.78 IOPS, 21.08 MiB/s [2024-11-19T09:47:00.801Z] 5394.20 IOPS, 21.07 MiB/s 00:19:53.352 Latency(us) 00:19:53.352 [2024-11-19T09:47:00.801Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:53.353 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:53.353 Verification LBA range: start 0x0 length 0x2000 00:19:53.353 TLSTESTn1 : 10.01 5400.05 21.09 0.00 0.00 23669.07 5385.35 26556.33 00:19:53.353 [2024-11-19T09:47:00.802Z] =================================================================================================================== 00:19:53.353 [2024-11-19T09:47:00.802Z] Total : 5400.05 21.09 0.00 0.00 23669.07 5385.35 26556.33 00:19:53.353 { 00:19:53.353 "results": [ 00:19:53.353 { 00:19:53.353 "job": "TLSTESTn1", 00:19:53.353 "core_mask": "0x4", 00:19:53.353 "workload": "verify", 00:19:53.353 "status": "finished", 00:19:53.353 "verify_range": { 00:19:53.353 "start": 0, 00:19:53.353 "length": 8192 00:19:53.353 }, 00:19:53.353 "queue_depth": 128, 00:19:53.353 "io_size": 4096, 00:19:53.353 "runtime": 10.012681, 00:19:53.353 "iops": 5400.052193813026, 00:19:53.353 "mibps": 21.093953882082133, 00:19:53.353 "io_failed": 0, 00:19:53.353 "io_timeout": 0, 00:19:53.353 "avg_latency_us": 23669.069455631172, 00:19:53.353 "min_latency_us": 5385.3495652173915, 00:19:53.353 "max_latency_us": 26556.326956521738 00:19:53.353 } 00:19:53.353 ], 00:19:53.353 "core_count": 1 00:19:53.353 } 00:19:53.353 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:53.353 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1712705 00:19:53.353 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1712705 ']' 00:19:53.353 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1712705 00:19:53.353 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:19:53.353 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:53.353 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1712705 00:19:53.353 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:53.353 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:53.353 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1712705' 00:19:53.353 killing process with pid 1712705 00:19:53.353 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1712705 00:19:53.353 Received shutdown signal, test time was about 10.000000 seconds 00:19:53.353 00:19:53.353 Latency(us) 00:19:53.353 [2024-11-19T09:47:00.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:53.353 [2024-11-19T09:47:00.802Z] =================================================================================================================== 00:19:53.353 [2024-11-19T09:47:00.802Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:53.353 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1712705 00:19:53.353 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1712551 00:19:53.353 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1712551 ']' 00:19:53.353 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1712551 00:19:53.353 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:53.353 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:53.353 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1712551 00:19:53.353 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:53.353 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:53.353 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1712551' 00:19:53.353 killing process with pid 1712551 00:19:53.353 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1712551 00:19:53.353 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1712551 00:19:53.612 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:19:53.612 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:53.612 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:53.612 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.612 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1714536 00:19:53.612 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:53.612 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1714536 
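waitforlisten now blocks until the freshly started target (pid 1714536) answers on /var/tmp/spdk.sock. A minimal sketch of that wait loop, assuming the rpc.py probe used elsewhere in this log; the retry budget, sleep interval, and rpc_get_methods probe are illustrative rather than the exact autotest_common.sh implementation:

  pid=1714536
  rpc_addr=/var/tmp/spdk.sock
  for ((i = 100; i > 0; i--)); do
      kill -0 "$pid" 2> /dev/null || break   # give up if the target process died
      # probe the UNIX-domain RPC socket with a 1-second timeout
      if scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
          break                              # the RPC server is listening
      fi
      sleep 0.5
  done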
00:19:53.612 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1714536 ']' 00:19:53.612 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.612 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:53.612 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:53.612 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:53.612 10:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.612 [2024-11-19 10:47:00.926318] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:19:53.612 [2024-11-19 10:47:00.926367] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:53.612 [2024-11-19 10:47:01.006181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.612 [2024-11-19 10:47:01.045098] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:53.612 [2024-11-19 10:47:01.045136] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:53.612 [2024-11-19 10:47:01.045147] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:53.612 [2024-11-19 10:47:01.045153] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:53.612 [2024-11-19 10:47:01.045158] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
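The tracepoint notices above come from starting the target with -e 0xFFFF, which arms every tracepoint group. Per the notice itself, the trace lives in shared memory and can be snapshotted live or kept for offline analysis; the spdk_trace path below assumes the standard SPDK build layout and is not shown in this log:

  build/bin/spdk_trace -s nvmf -i 0   # snapshot the live trace of app "nvmf", shm id 0
  cp /dev/shm/nvmf_trace.0 /tmp/      # or copy the raw trace file for offline analysis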
00:19:53.612 [2024-11-19 10:47:01.045723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.871 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:53.871 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:53.871 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:53.871 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:53.871 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.871 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.871 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.NoflwE3gor 00:19:53.871 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.NoflwE3gor 00:19:53.871 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:54.130 [2024-11-19 10:47:01.354297] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:54.131 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:54.389 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:54.389 [2024-11-19 10:47:01.751326] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:54.389 [2024-11-19 10:47:01.751516] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:54.389 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:54.648 malloc0 00:19:54.648 10:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:54.907 10:47:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.NoflwE3gor 00:19:55.165 10:47:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:55.165 10:47:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1714921 00:19:55.165 10:47:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:55.165 10:47:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:55.165 10:47:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1714921 /var/tmp/bdevperf.sock 00:19:55.165 10:47:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 1714921 ']' 00:19:55.165 10:47:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:55.165 10:47:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:55.165 10:47:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:55.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:55.165 10:47:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:55.165 10:47:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.424 [2024-11-19 10:47:02.616620] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:19:55.424 [2024-11-19 10:47:02.616666] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1714921 ] 00:19:55.424 [2024-11-19 10:47:02.694817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.424 [2024-11-19 10:47:02.735865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:55.424 10:47:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:55.424 10:47:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:55.424 10:47:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NoflwE3gor 00:19:55.682 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:55.941 [2024-11-19 10:47:03.191536] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:55.941 nvme0n1 00:19:55.941 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:55.941 Running I/O for 1 seconds... 
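While the one-second verify run executes, the setup it exercises is easier to read collected in one place. The commands below restate the RPC calls traced at target/tls.sh@52-@59 and @229-@230 above, verbatim except that the long workspace prefix is shortened to scripts/rpc.py:

  # target side (default socket /var/tmp/spdk.sock)
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.NoflwE3gor
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
  # initiator side, against the bdevperf RPC socket
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NoflwE3gor
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1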
00:19:57.319 5375.00 IOPS, 21.00 MiB/s 00:19:57.319 Latency(us) 00:19:57.319 [2024-11-19T09:47:04.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.319 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:57.319 Verification LBA range: start 0x0 length 0x2000 00:19:57.319 nvme0n1 : 1.02 5409.26 21.13 0.00 0.00 23483.80 6040.71 19945.74 00:19:57.319 [2024-11-19T09:47:04.768Z] =================================================================================================================== 00:19:57.319 [2024-11-19T09:47:04.768Z] Total : 5409.26 21.13 0.00 0.00 23483.80 6040.71 19945.74 00:19:57.319 { 00:19:57.319 "results": [ 00:19:57.319 { 00:19:57.319 "job": "nvme0n1", 00:19:57.319 "core_mask": "0x2", 00:19:57.319 "workload": "verify", 00:19:57.319 "status": "finished", 00:19:57.319 "verify_range": { 00:19:57.319 "start": 0, 00:19:57.319 "length": 8192 00:19:57.319 }, 00:19:57.319 "queue_depth": 128, 00:19:57.319 "io_size": 4096, 00:19:57.319 "runtime": 1.017515, 00:19:57.319 "iops": 5409.256865992147, 00:19:57.319 "mibps": 21.129909632781825, 00:19:57.319 "io_failed": 0, 00:19:57.319 "io_timeout": 0, 00:19:57.319 "avg_latency_us": 23483.798907987868, 00:19:57.319 "min_latency_us": 6040.709565217391, 00:19:57.319 "max_latency_us": 19945.739130434784 00:19:57.319 } 00:19:57.319 ], 00:19:57.319 "core_count": 1 00:19:57.319 } 00:19:57.319 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1714921 00:19:57.319 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1714921 ']' 00:19:57.319 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1714921 00:19:57.319 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:57.319 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:57.319 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1714921 00:19:57.319 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:57.319 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:57.319 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1714921' 00:19:57.319 killing process with pid 1714921 00:19:57.319 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1714921 00:19:57.319 Received shutdown signal, test time was about 1.000000 seconds 00:19:57.319 00:19:57.319 Latency(us) 00:19:57.319 [2024-11-19T09:47:04.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.319 [2024-11-19T09:47:04.768Z] =================================================================================================================== 00:19:57.319 [2024-11-19T09:47:04.768Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:57.319 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1714921 00:19:57.319 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1714536 00:19:57.319 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1714536 ']' 00:19:57.319 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1714536 00:19:57.319 10:47:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:57.319 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:57.319 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1714536 00:19:57.319 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:57.319 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:57.319 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1714536' 00:19:57.319 killing process with pid 1714536 00:19:57.319 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1714536 00:19:57.319 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1714536 00:19:57.579 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:19:57.579 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:57.579 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:57.579 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.579 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1715173 00:19:57.579 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:57.579 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1715173 00:19:57.579 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1715173 ']' 00:19:57.579 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:57.579 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:57.579 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:57.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:57.579 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:57.579 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.579 [2024-11-19 10:47:04.886702] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:19:57.579 [2024-11-19 10:47:04.886749] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:57.579 [2024-11-19 10:47:04.965185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.579 [2024-11-19 10:47:05.001543] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:57.579 [2024-11-19 10:47:05.001581] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:57.579 [2024-11-19 10:47:05.001589] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:57.579 [2024-11-19 10:47:05.001595] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:57.579 [2024-11-19 10:47:05.001600] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:57.579 [2024-11-19 10:47:05.002185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.838 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:57.838 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:57.838 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:57.838 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:57.838 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.838 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:57.838 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:19:57.838 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.838 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.838 [2024-11-19 10:47:05.146087] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:57.838 malloc0 00:19:57.838 [2024-11-19 10:47:05.174381] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:57.838 [2024-11-19 10:47:05.174567] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:57.838 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.838 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:57.838 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1715321 00:19:57.838 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1715321 /var/tmp/bdevperf.sock 00:19:57.838 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1715321 ']' 00:19:57.838 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:57.838 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:57.838 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:57.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:57.838 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:57.838 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.838 [2024-11-19 10:47:05.245011] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
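(The run above brought up a TLS-enabled target with a listener on 10.0.0.2:4420 and launched a fresh bdevperf instance; the trace that follows registers a PSK and attaches a controller over TLS. Consolidated as a plain sketch — socket path, key name, and NQNs are taken verbatim from this log, with the long /var/jenkins/... script paths abbreviated and spdk/scripts assumed on PATH:)

    # register the pre-shared key file with the initiator's keyring
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NoflwE3gor

    # attach an NVMe/TCP controller, selecting TLS by referencing the key by name
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

    # run the timed verify workload against the attached namespace
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests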
00:19:57.838 [2024-11-19 10:47:05.245054] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1715321 ] 00:19:58.099 [2024-11-19 10:47:05.321251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.099 [2024-11-19 10:47:05.364001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.099 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:58.099 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:58.099 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NoflwE3gor 00:19:58.357 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:58.615 [2024-11-19 10:47:05.831478] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:58.615 nvme0n1 00:19:58.615 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:58.615 Running I/O for 1 seconds... 00:19:59.591 5360.00 IOPS, 20.94 MiB/s 00:19:59.591 Latency(us) 00:19:59.591 [2024-11-19T09:47:07.040Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.591 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:59.591 Verification LBA range: start 0x0 length 0x2000 00:19:59.591 nvme0n1 : 1.01 5418.06 21.16 0.00 0.00 23464.13 5328.36 27126.21 00:19:59.591 [2024-11-19T09:47:07.040Z] =================================================================================================================== 00:19:59.591 [2024-11-19T09:47:07.040Z] Total : 5418.06 21.16 0.00 0.00 23464.13 5328.36 27126.21 00:19:59.591 { 00:19:59.591 "results": [ 00:19:59.591 { 00:19:59.591 "job": "nvme0n1", 00:19:59.591 "core_mask": "0x2", 00:19:59.591 "workload": "verify", 00:19:59.591 "status": "finished", 00:19:59.591 "verify_range": { 00:19:59.591 "start": 0, 00:19:59.591 "length": 8192 00:19:59.591 }, 00:19:59.591 "queue_depth": 128, 00:19:59.591 "io_size": 4096, 00:19:59.591 "runtime": 1.012908, 00:19:59.591 "iops": 5418.063634604525, 00:19:59.591 "mibps": 21.164311072673925, 00:19:59.591 "io_failed": 0, 00:19:59.591 "io_timeout": 0, 00:19:59.591 "avg_latency_us": 23464.12732665737, 00:19:59.591 "min_latency_us": 5328.361739130435, 00:19:59.591 "max_latency_us": 27126.205217391303 00:19:59.591 } 00:19:59.591 ], 00:19:59.591 "core_count": 1 00:19:59.591 } 00:19:59.850 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:19:59.850 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.850 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.850 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.850 10:47:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:19:59.850 "subsystems": [ 00:19:59.850 { 00:19:59.850 "subsystem": "keyring", 00:19:59.850 "config": [ 00:19:59.850 { 00:19:59.850 "method": "keyring_file_add_key", 00:19:59.850 "params": { 00:19:59.850 "name": "key0", 00:19:59.850 "path": "/tmp/tmp.NoflwE3gor" 00:19:59.850 } 00:19:59.850 } 00:19:59.850 ] 00:19:59.850 }, 00:19:59.850 { 00:19:59.850 "subsystem": "iobuf", 00:19:59.850 "config": [ 00:19:59.850 { 00:19:59.850 "method": "iobuf_set_options", 00:19:59.850 "params": { 00:19:59.850 "small_pool_count": 8192, 00:19:59.850 "large_pool_count": 1024, 00:19:59.850 "small_bufsize": 8192, 00:19:59.850 "large_bufsize": 135168, 00:19:59.850 "enable_numa": false 00:19:59.850 } 00:19:59.850 } 00:19:59.850 ] 00:19:59.850 }, 00:19:59.850 { 00:19:59.850 "subsystem": "sock", 00:19:59.850 "config": [ 00:19:59.850 { 00:19:59.850 "method": "sock_set_default_impl", 00:19:59.850 "params": { 00:19:59.850 "impl_name": "posix" 00:19:59.850 } 00:19:59.850 }, 00:19:59.850 { 00:19:59.850 "method": "sock_impl_set_options", 00:19:59.850 "params": { 00:19:59.850 "impl_name": "ssl", 00:19:59.850 "recv_buf_size": 4096, 00:19:59.850 "send_buf_size": 4096, 00:19:59.850 "enable_recv_pipe": true, 00:19:59.850 "enable_quickack": false, 00:19:59.850 "enable_placement_id": 0, 00:19:59.850 "enable_zerocopy_send_server": true, 00:19:59.850 "enable_zerocopy_send_client": false, 00:19:59.850 "zerocopy_threshold": 0, 00:19:59.850 "tls_version": 0, 00:19:59.850 "enable_ktls": false 00:19:59.850 } 00:19:59.850 }, 00:19:59.850 { 00:19:59.850 "method": "sock_impl_set_options", 00:19:59.850 "params": { 00:19:59.850 "impl_name": "posix", 00:19:59.850 "recv_buf_size": 2097152, 00:19:59.850 "send_buf_size": 2097152, 00:19:59.850 "enable_recv_pipe": true, 00:19:59.850 "enable_quickack": false, 00:19:59.850 "enable_placement_id": 0, 00:19:59.850 "enable_zerocopy_send_server": true, 00:19:59.850 "enable_zerocopy_send_client": false, 00:19:59.850 "zerocopy_threshold": 0, 00:19:59.850 "tls_version": 0, 00:19:59.850 "enable_ktls": false 00:19:59.850 } 00:19:59.850 } 00:19:59.850 ] 00:19:59.850 }, 00:19:59.850 { 00:19:59.850 "subsystem": "vmd", 00:19:59.850 "config": [] 00:19:59.850 }, 00:19:59.850 { 00:19:59.850 "subsystem": "accel", 00:19:59.850 "config": [ 00:19:59.850 { 00:19:59.850 "method": "accel_set_options", 00:19:59.850 "params": { 00:19:59.850 "small_cache_size": 128, 00:19:59.850 "large_cache_size": 16, 00:19:59.850 "task_count": 2048, 00:19:59.850 "sequence_count": 2048, 00:19:59.850 "buf_count": 2048 00:19:59.850 } 00:19:59.850 } 00:19:59.850 ] 00:19:59.850 }, 00:19:59.850 { 00:19:59.850 "subsystem": "bdev", 00:19:59.850 "config": [ 00:19:59.850 { 00:19:59.850 "method": "bdev_set_options", 00:19:59.850 "params": { 00:19:59.850 "bdev_io_pool_size": 65535, 00:19:59.850 "bdev_io_cache_size": 256, 00:19:59.850 "bdev_auto_examine": true, 00:19:59.850 "iobuf_small_cache_size": 128, 00:19:59.850 "iobuf_large_cache_size": 16 00:19:59.850 } 00:19:59.850 }, 00:19:59.850 { 00:19:59.850 "method": "bdev_raid_set_options", 00:19:59.850 "params": { 00:19:59.850 "process_window_size_kb": 1024, 00:19:59.850 "process_max_bandwidth_mb_sec": 0 00:19:59.850 } 00:19:59.850 }, 00:19:59.850 { 00:19:59.850 "method": "bdev_iscsi_set_options", 00:19:59.850 "params": { 00:19:59.850 "timeout_sec": 30 00:19:59.850 } 00:19:59.850 }, 00:19:59.850 { 00:19:59.850 "method": "bdev_nvme_set_options", 00:19:59.850 "params": { 00:19:59.850 "action_on_timeout": "none", 00:19:59.850 
"timeout_us": 0, 00:19:59.850 "timeout_admin_us": 0, 00:19:59.850 "keep_alive_timeout_ms": 10000, 00:19:59.850 "arbitration_burst": 0, 00:19:59.850 "low_priority_weight": 0, 00:19:59.850 "medium_priority_weight": 0, 00:19:59.850 "high_priority_weight": 0, 00:19:59.850 "nvme_adminq_poll_period_us": 10000, 00:19:59.850 "nvme_ioq_poll_period_us": 0, 00:19:59.850 "io_queue_requests": 0, 00:19:59.850 "delay_cmd_submit": true, 00:19:59.850 "transport_retry_count": 4, 00:19:59.850 "bdev_retry_count": 3, 00:19:59.850 "transport_ack_timeout": 0, 00:19:59.850 "ctrlr_loss_timeout_sec": 0, 00:19:59.850 "reconnect_delay_sec": 0, 00:19:59.850 "fast_io_fail_timeout_sec": 0, 00:19:59.850 "disable_auto_failback": false, 00:19:59.850 "generate_uuids": false, 00:19:59.850 "transport_tos": 0, 00:19:59.850 "nvme_error_stat": false, 00:19:59.850 "rdma_srq_size": 0, 00:19:59.850 "io_path_stat": false, 00:19:59.850 "allow_accel_sequence": false, 00:19:59.850 "rdma_max_cq_size": 0, 00:19:59.850 "rdma_cm_event_timeout_ms": 0, 00:19:59.850 "dhchap_digests": [ 00:19:59.850 "sha256", 00:19:59.850 "sha384", 00:19:59.850 "sha512" 00:19:59.850 ], 00:19:59.850 "dhchap_dhgroups": [ 00:19:59.850 "null", 00:19:59.850 "ffdhe2048", 00:19:59.850 "ffdhe3072", 00:19:59.850 "ffdhe4096", 00:19:59.850 "ffdhe6144", 00:19:59.850 "ffdhe8192" 00:19:59.850 ] 00:19:59.850 } 00:19:59.850 }, 00:19:59.850 { 00:19:59.850 "method": "bdev_nvme_set_hotplug", 00:19:59.850 "params": { 00:19:59.850 "period_us": 100000, 00:19:59.850 "enable": false 00:19:59.850 } 00:19:59.850 }, 00:19:59.850 { 00:19:59.850 "method": "bdev_malloc_create", 00:19:59.850 "params": { 00:19:59.850 "name": "malloc0", 00:19:59.850 "num_blocks": 8192, 00:19:59.850 "block_size": 4096, 00:19:59.850 "physical_block_size": 4096, 00:19:59.850 "uuid": "08397c3b-f86c-4fdb-a1fc-6f31137fb7a0", 00:19:59.850 "optimal_io_boundary": 0, 00:19:59.850 "md_size": 0, 00:19:59.850 "dif_type": 0, 00:19:59.850 "dif_is_head_of_md": false, 00:19:59.850 "dif_pi_format": 0 00:19:59.850 } 00:19:59.850 }, 00:19:59.850 { 00:19:59.850 "method": "bdev_wait_for_examine" 00:19:59.850 } 00:19:59.850 ] 00:19:59.850 }, 00:19:59.850 { 00:19:59.850 "subsystem": "nbd", 00:19:59.850 "config": [] 00:19:59.850 }, 00:19:59.850 { 00:19:59.850 "subsystem": "scheduler", 00:19:59.850 "config": [ 00:19:59.850 { 00:19:59.850 "method": "framework_set_scheduler", 00:19:59.850 "params": { 00:19:59.850 "name": "static" 00:19:59.850 } 00:19:59.850 } 00:19:59.850 ] 00:19:59.850 }, 00:19:59.850 { 00:19:59.850 "subsystem": "nvmf", 00:19:59.850 "config": [ 00:19:59.850 { 00:19:59.850 "method": "nvmf_set_config", 00:19:59.850 "params": { 00:19:59.850 "discovery_filter": "match_any", 00:19:59.850 "admin_cmd_passthru": { 00:19:59.850 "identify_ctrlr": false 00:19:59.850 }, 00:19:59.850 "dhchap_digests": [ 00:19:59.850 "sha256", 00:19:59.850 "sha384", 00:19:59.850 "sha512" 00:19:59.850 ], 00:19:59.850 "dhchap_dhgroups": [ 00:19:59.850 "null", 00:19:59.850 "ffdhe2048", 00:19:59.850 "ffdhe3072", 00:19:59.850 "ffdhe4096", 00:19:59.850 "ffdhe6144", 00:19:59.850 "ffdhe8192" 00:19:59.850 ] 00:19:59.850 } 00:19:59.850 }, 00:19:59.850 { 00:19:59.850 "method": "nvmf_set_max_subsystems", 00:19:59.850 "params": { 00:19:59.850 "max_subsystems": 1024 00:19:59.850 } 00:19:59.850 }, 00:19:59.850 { 00:19:59.850 "method": "nvmf_set_crdt", 00:19:59.850 "params": { 00:19:59.850 "crdt1": 0, 00:19:59.850 "crdt2": 0, 00:19:59.850 "crdt3": 0 00:19:59.850 } 00:19:59.850 }, 00:19:59.850 { 00:19:59.850 "method": "nvmf_create_transport", 00:19:59.850 "params": 
{ 00:19:59.850 "trtype": "TCP", 00:19:59.850 "max_queue_depth": 128, 00:19:59.850 "max_io_qpairs_per_ctrlr": 127, 00:19:59.850 "in_capsule_data_size": 4096, 00:19:59.850 "max_io_size": 131072, 00:19:59.850 "io_unit_size": 131072, 00:19:59.850 "max_aq_depth": 128, 00:19:59.850 "num_shared_buffers": 511, 00:19:59.850 "buf_cache_size": 4294967295, 00:19:59.850 "dif_insert_or_strip": false, 00:19:59.850 "zcopy": false, 00:19:59.850 "c2h_success": false, 00:19:59.850 "sock_priority": 0, 00:19:59.850 "abort_timeout_sec": 1, 00:19:59.850 "ack_timeout": 0, 00:19:59.850 "data_wr_pool_size": 0 00:19:59.850 } 00:19:59.850 }, 00:19:59.850 { 00:19:59.851 "method": "nvmf_create_subsystem", 00:19:59.851 "params": { 00:19:59.851 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.851 "allow_any_host": false, 00:19:59.851 "serial_number": "00000000000000000000", 00:19:59.851 "model_number": "SPDK bdev Controller", 00:19:59.851 "max_namespaces": 32, 00:19:59.851 "min_cntlid": 1, 00:19:59.851 "max_cntlid": 65519, 00:19:59.851 "ana_reporting": false 00:19:59.851 } 00:19:59.851 }, 00:19:59.851 { 00:19:59.851 "method": "nvmf_subsystem_add_host", 00:19:59.851 "params": { 00:19:59.851 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.851 "host": "nqn.2016-06.io.spdk:host1", 00:19:59.851 "psk": "key0" 00:19:59.851 } 00:19:59.851 }, 00:19:59.851 { 00:19:59.851 "method": "nvmf_subsystem_add_ns", 00:19:59.851 "params": { 00:19:59.851 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.851 "namespace": { 00:19:59.851 "nsid": 1, 00:19:59.851 "bdev_name": "malloc0", 00:19:59.851 "nguid": "08397C3BF86C4FDBA1FC6F31137FB7A0", 00:19:59.851 "uuid": "08397c3b-f86c-4fdb-a1fc-6f31137fb7a0", 00:19:59.851 "no_auto_visible": false 00:19:59.851 } 00:19:59.851 } 00:19:59.851 }, 00:19:59.851 { 00:19:59.851 "method": "nvmf_subsystem_add_listener", 00:19:59.851 "params": { 00:19:59.851 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.851 "listen_address": { 00:19:59.851 "trtype": "TCP", 00:19:59.851 "adrfam": "IPv4", 00:19:59.851 "traddr": "10.0.0.2", 00:19:59.851 "trsvcid": "4420" 00:19:59.851 }, 00:19:59.851 "secure_channel": false, 00:19:59.851 "sock_impl": "ssl" 00:19:59.851 } 00:19:59.851 } 00:19:59.851 ] 00:19:59.851 } 00:19:59.851 ] 00:19:59.851 }' 00:19:59.851 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:00.110 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:00.110 "subsystems": [ 00:20:00.110 { 00:20:00.110 "subsystem": "keyring", 00:20:00.110 "config": [ 00:20:00.110 { 00:20:00.110 "method": "keyring_file_add_key", 00:20:00.110 "params": { 00:20:00.110 "name": "key0", 00:20:00.110 "path": "/tmp/tmp.NoflwE3gor" 00:20:00.110 } 00:20:00.110 } 00:20:00.110 ] 00:20:00.110 }, 00:20:00.110 { 00:20:00.110 "subsystem": "iobuf", 00:20:00.110 "config": [ 00:20:00.110 { 00:20:00.110 "method": "iobuf_set_options", 00:20:00.110 "params": { 00:20:00.110 "small_pool_count": 8192, 00:20:00.110 "large_pool_count": 1024, 00:20:00.110 "small_bufsize": 8192, 00:20:00.110 "large_bufsize": 135168, 00:20:00.110 "enable_numa": false 00:20:00.110 } 00:20:00.110 } 00:20:00.110 ] 00:20:00.110 }, 00:20:00.110 { 00:20:00.110 "subsystem": "sock", 00:20:00.110 "config": [ 00:20:00.110 { 00:20:00.110 "method": "sock_set_default_impl", 00:20:00.110 "params": { 00:20:00.110 "impl_name": "posix" 00:20:00.110 } 00:20:00.110 }, 00:20:00.110 { 00:20:00.110 "method": "sock_impl_set_options", 00:20:00.110 
"params": { 00:20:00.110 "impl_name": "ssl", 00:20:00.110 "recv_buf_size": 4096, 00:20:00.110 "send_buf_size": 4096, 00:20:00.110 "enable_recv_pipe": true, 00:20:00.110 "enable_quickack": false, 00:20:00.110 "enable_placement_id": 0, 00:20:00.110 "enable_zerocopy_send_server": true, 00:20:00.110 "enable_zerocopy_send_client": false, 00:20:00.110 "zerocopy_threshold": 0, 00:20:00.110 "tls_version": 0, 00:20:00.110 "enable_ktls": false 00:20:00.110 } 00:20:00.110 }, 00:20:00.110 { 00:20:00.110 "method": "sock_impl_set_options", 00:20:00.110 "params": { 00:20:00.110 "impl_name": "posix", 00:20:00.110 "recv_buf_size": 2097152, 00:20:00.110 "send_buf_size": 2097152, 00:20:00.110 "enable_recv_pipe": true, 00:20:00.110 "enable_quickack": false, 00:20:00.110 "enable_placement_id": 0, 00:20:00.110 "enable_zerocopy_send_server": true, 00:20:00.110 "enable_zerocopy_send_client": false, 00:20:00.110 "zerocopy_threshold": 0, 00:20:00.110 "tls_version": 0, 00:20:00.110 "enable_ktls": false 00:20:00.110 } 00:20:00.110 } 00:20:00.110 ] 00:20:00.110 }, 00:20:00.110 { 00:20:00.110 "subsystem": "vmd", 00:20:00.110 "config": [] 00:20:00.110 }, 00:20:00.110 { 00:20:00.110 "subsystem": "accel", 00:20:00.110 "config": [ 00:20:00.110 { 00:20:00.110 "method": "accel_set_options", 00:20:00.110 "params": { 00:20:00.110 "small_cache_size": 128, 00:20:00.110 "large_cache_size": 16, 00:20:00.110 "task_count": 2048, 00:20:00.110 "sequence_count": 2048, 00:20:00.110 "buf_count": 2048 00:20:00.110 } 00:20:00.110 } 00:20:00.110 ] 00:20:00.110 }, 00:20:00.110 { 00:20:00.110 "subsystem": "bdev", 00:20:00.110 "config": [ 00:20:00.110 { 00:20:00.110 "method": "bdev_set_options", 00:20:00.110 "params": { 00:20:00.110 "bdev_io_pool_size": 65535, 00:20:00.110 "bdev_io_cache_size": 256, 00:20:00.110 "bdev_auto_examine": true, 00:20:00.110 "iobuf_small_cache_size": 128, 00:20:00.110 "iobuf_large_cache_size": 16 00:20:00.110 } 00:20:00.110 }, 00:20:00.110 { 00:20:00.110 "method": "bdev_raid_set_options", 00:20:00.110 "params": { 00:20:00.110 "process_window_size_kb": 1024, 00:20:00.110 "process_max_bandwidth_mb_sec": 0 00:20:00.110 } 00:20:00.110 }, 00:20:00.110 { 00:20:00.110 "method": "bdev_iscsi_set_options", 00:20:00.110 "params": { 00:20:00.110 "timeout_sec": 30 00:20:00.110 } 00:20:00.110 }, 00:20:00.110 { 00:20:00.110 "method": "bdev_nvme_set_options", 00:20:00.110 "params": { 00:20:00.110 "action_on_timeout": "none", 00:20:00.110 "timeout_us": 0, 00:20:00.110 "timeout_admin_us": 0, 00:20:00.110 "keep_alive_timeout_ms": 10000, 00:20:00.110 "arbitration_burst": 0, 00:20:00.110 "low_priority_weight": 0, 00:20:00.110 "medium_priority_weight": 0, 00:20:00.110 "high_priority_weight": 0, 00:20:00.110 "nvme_adminq_poll_period_us": 10000, 00:20:00.110 "nvme_ioq_poll_period_us": 0, 00:20:00.110 "io_queue_requests": 512, 00:20:00.110 "delay_cmd_submit": true, 00:20:00.110 "transport_retry_count": 4, 00:20:00.110 "bdev_retry_count": 3, 00:20:00.110 "transport_ack_timeout": 0, 00:20:00.110 "ctrlr_loss_timeout_sec": 0, 00:20:00.110 "reconnect_delay_sec": 0, 00:20:00.110 "fast_io_fail_timeout_sec": 0, 00:20:00.110 "disable_auto_failback": false, 00:20:00.110 "generate_uuids": false, 00:20:00.110 "transport_tos": 0, 00:20:00.110 "nvme_error_stat": false, 00:20:00.110 "rdma_srq_size": 0, 00:20:00.110 "io_path_stat": false, 00:20:00.110 "allow_accel_sequence": false, 00:20:00.110 "rdma_max_cq_size": 0, 00:20:00.110 "rdma_cm_event_timeout_ms": 0, 00:20:00.110 "dhchap_digests": [ 00:20:00.110 "sha256", 00:20:00.110 "sha384", 00:20:00.110 
"sha512" 00:20:00.110 ], 00:20:00.110 "dhchap_dhgroups": [ 00:20:00.110 "null", 00:20:00.110 "ffdhe2048", 00:20:00.110 "ffdhe3072", 00:20:00.110 "ffdhe4096", 00:20:00.110 "ffdhe6144", 00:20:00.110 "ffdhe8192" 00:20:00.110 ] 00:20:00.110 } 00:20:00.110 }, 00:20:00.110 { 00:20:00.110 "method": "bdev_nvme_attach_controller", 00:20:00.110 "params": { 00:20:00.110 "name": "nvme0", 00:20:00.110 "trtype": "TCP", 00:20:00.111 "adrfam": "IPv4", 00:20:00.111 "traddr": "10.0.0.2", 00:20:00.111 "trsvcid": "4420", 00:20:00.111 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.111 "prchk_reftag": false, 00:20:00.111 "prchk_guard": false, 00:20:00.111 "ctrlr_loss_timeout_sec": 0, 00:20:00.111 "reconnect_delay_sec": 0, 00:20:00.111 "fast_io_fail_timeout_sec": 0, 00:20:00.111 "psk": "key0", 00:20:00.111 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:00.111 "hdgst": false, 00:20:00.111 "ddgst": false, 00:20:00.111 "multipath": "multipath" 00:20:00.111 } 00:20:00.111 }, 00:20:00.111 { 00:20:00.111 "method": "bdev_nvme_set_hotplug", 00:20:00.111 "params": { 00:20:00.111 "period_us": 100000, 00:20:00.111 "enable": false 00:20:00.111 } 00:20:00.111 }, 00:20:00.111 { 00:20:00.111 "method": "bdev_enable_histogram", 00:20:00.111 "params": { 00:20:00.111 "name": "nvme0n1", 00:20:00.111 "enable": true 00:20:00.111 } 00:20:00.111 }, 00:20:00.111 { 00:20:00.111 "method": "bdev_wait_for_examine" 00:20:00.111 } 00:20:00.111 ] 00:20:00.111 }, 00:20:00.111 { 00:20:00.111 "subsystem": "nbd", 00:20:00.111 "config": [] 00:20:00.111 } 00:20:00.111 ] 00:20:00.111 }' 00:20:00.111 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1715321 00:20:00.111 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1715321 ']' 00:20:00.111 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1715321 00:20:00.111 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:00.111 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:00.111 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1715321 00:20:00.111 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:00.111 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:00.111 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1715321' 00:20:00.111 killing process with pid 1715321 00:20:00.111 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1715321 00:20:00.111 Received shutdown signal, test time was about 1.000000 seconds 00:20:00.111 00:20:00.111 Latency(us) 00:20:00.111 [2024-11-19T09:47:07.560Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.111 [2024-11-19T09:47:07.560Z] =================================================================================================================== 00:20:00.111 [2024-11-19T09:47:07.560Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:00.111 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1715321 00:20:00.370 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1715173 00:20:00.370 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1715173 
']' 00:20:00.370 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1715173 00:20:00.370 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:00.370 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:00.370 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1715173 00:20:00.370 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:00.370 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:00.370 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1715173' 00:20:00.370 killing process with pid 1715173 00:20:00.370 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1715173 00:20:00.370 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1715173 00:20:00.628 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:00.628 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:00.628 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:00.628 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:00.628 "subsystems": [ 00:20:00.628 { 00:20:00.628 "subsystem": "keyring", 00:20:00.628 "config": [ 00:20:00.628 { 00:20:00.628 "method": "keyring_file_add_key", 00:20:00.628 "params": { 00:20:00.628 "name": "key0", 00:20:00.628 "path": "/tmp/tmp.NoflwE3gor" 00:20:00.628 } 00:20:00.628 } 00:20:00.628 ] 00:20:00.628 }, 00:20:00.628 { 00:20:00.628 "subsystem": "iobuf", 00:20:00.628 "config": [ 00:20:00.628 { 00:20:00.628 "method": "iobuf_set_options", 00:20:00.628 "params": { 00:20:00.628 "small_pool_count": 8192, 00:20:00.628 "large_pool_count": 1024, 00:20:00.628 "small_bufsize": 8192, 00:20:00.628 "large_bufsize": 135168, 00:20:00.628 "enable_numa": false 00:20:00.628 } 00:20:00.628 } 00:20:00.628 ] 00:20:00.628 }, 00:20:00.628 { 00:20:00.628 "subsystem": "sock", 00:20:00.628 "config": [ 00:20:00.628 { 00:20:00.628 "method": "sock_set_default_impl", 00:20:00.628 "params": { 00:20:00.628 "impl_name": "posix" 00:20:00.628 } 00:20:00.628 }, 00:20:00.628 { 00:20:00.628 "method": "sock_impl_set_options", 00:20:00.628 "params": { 00:20:00.628 "impl_name": "ssl", 00:20:00.628 "recv_buf_size": 4096, 00:20:00.628 "send_buf_size": 4096, 00:20:00.628 "enable_recv_pipe": true, 00:20:00.628 "enable_quickack": false, 00:20:00.628 "enable_placement_id": 0, 00:20:00.628 "enable_zerocopy_send_server": true, 00:20:00.628 "enable_zerocopy_send_client": false, 00:20:00.628 "zerocopy_threshold": 0, 00:20:00.628 "tls_version": 0, 00:20:00.628 "enable_ktls": false 00:20:00.628 } 00:20:00.628 }, 00:20:00.628 { 00:20:00.628 "method": "sock_impl_set_options", 00:20:00.628 "params": { 00:20:00.628 "impl_name": "posix", 00:20:00.628 "recv_buf_size": 2097152, 00:20:00.628 "send_buf_size": 2097152, 00:20:00.628 "enable_recv_pipe": true, 00:20:00.628 "enable_quickack": false, 00:20:00.628 "enable_placement_id": 0, 00:20:00.628 "enable_zerocopy_send_server": true, 00:20:00.628 "enable_zerocopy_send_client": false, 00:20:00.628 "zerocopy_threshold": 0, 00:20:00.628 "tls_version": 0, 00:20:00.628 "enable_ktls": 
false 00:20:00.628 } 00:20:00.628 } 00:20:00.628 ] 00:20:00.628 }, 00:20:00.628 { 00:20:00.628 "subsystem": "vmd", 00:20:00.628 "config": [] 00:20:00.628 }, 00:20:00.628 { 00:20:00.628 "subsystem": "accel", 00:20:00.628 "config": [ 00:20:00.628 { 00:20:00.628 "method": "accel_set_options", 00:20:00.628 "params": { 00:20:00.628 "small_cache_size": 128, 00:20:00.628 "large_cache_size": 16, 00:20:00.628 "task_count": 2048, 00:20:00.628 "sequence_count": 2048, 00:20:00.628 "buf_count": 2048 00:20:00.628 } 00:20:00.628 } 00:20:00.628 ] 00:20:00.628 }, 00:20:00.628 { 00:20:00.628 "subsystem": "bdev", 00:20:00.628 "config": [ 00:20:00.628 { 00:20:00.628 "method": "bdev_set_options", 00:20:00.628 "params": { 00:20:00.628 "bdev_io_pool_size": 65535, 00:20:00.628 "bdev_io_cache_size": 256, 00:20:00.628 "bdev_auto_examine": true, 00:20:00.628 "iobuf_small_cache_size": 128, 00:20:00.628 "iobuf_large_cache_size": 16 00:20:00.628 } 00:20:00.628 }, 00:20:00.628 { 00:20:00.628 "method": "bdev_raid_set_options", 00:20:00.628 "params": { 00:20:00.628 "process_window_size_kb": 1024, 00:20:00.628 "process_max_bandwidth_mb_sec": 0 00:20:00.628 } 00:20:00.628 }, 00:20:00.628 { 00:20:00.628 "method": "bdev_iscsi_set_options", 00:20:00.628 "params": { 00:20:00.628 "timeout_sec": 30 00:20:00.628 } 00:20:00.628 }, 00:20:00.628 { 00:20:00.628 "method": "bdev_nvme_set_options", 00:20:00.628 "params": { 00:20:00.628 "action_on_timeout": "none", 00:20:00.628 "timeout_us": 0, 00:20:00.628 "timeout_admin_us": 0, 00:20:00.628 "keep_alive_timeout_ms": 10000, 00:20:00.628 "arbitration_burst": 0, 00:20:00.628 "low_priority_weight": 0, 00:20:00.628 "medium_priority_weight": 0, 00:20:00.628 "high_priority_weight": 0, 00:20:00.628 "nvme_adminq_poll_period_us": 10000, 00:20:00.628 "nvme_ioq_poll_period_us": 0, 00:20:00.628 "io_queue_requests": 0, 00:20:00.628 "delay_cmd_submit": true, 00:20:00.628 "transport_retry_count": 4, 00:20:00.628 "bdev_retry_count": 3, 00:20:00.628 "transport_ack_timeout": 0, 00:20:00.628 "ctrlr_loss_timeout_sec": 0, 00:20:00.628 "reconnect_delay_sec": 0, 00:20:00.628 "fast_io_fail_timeout_sec": 0, 00:20:00.628 "disable_auto_failback": false, 00:20:00.628 "generate_uuids": false, 00:20:00.628 "transport_tos": 0, 00:20:00.628 "nvme_error_stat": false, 00:20:00.628 "rdma_srq_size": 0, 00:20:00.628 "io_path_stat": false, 00:20:00.628 "allow_accel_sequence": false, 00:20:00.628 "rdma_max_cq_size": 0, 00:20:00.628 "rdma_cm_event_timeout_ms": 0, 00:20:00.628 "dhchap_digests": [ 00:20:00.628 "sha256", 00:20:00.628 "sha384", 00:20:00.628 "sha512" 00:20:00.628 ], 00:20:00.628 "dhchap_dhgroups": [ 00:20:00.628 "null", 00:20:00.628 "ffdhe2048", 00:20:00.628 "ffdhe3072", 00:20:00.628 "ffdhe4096", 00:20:00.628 "ffdhe6144", 00:20:00.628 "ffdhe8192" 00:20:00.628 ] 00:20:00.628 } 00:20:00.628 }, 00:20:00.628 { 00:20:00.628 "method": "bdev_nvme_set_hotplug", 00:20:00.628 "params": { 00:20:00.628 "period_us": 100000, 00:20:00.628 "enable": false 00:20:00.628 } 00:20:00.628 }, 00:20:00.628 { 00:20:00.628 "method": "bdev_malloc_create", 00:20:00.628 "params": { 00:20:00.628 "name": "malloc0", 00:20:00.628 "num_blocks": 8192, 00:20:00.629 "block_size": 4096, 00:20:00.629 "physical_block_size": 4096, 00:20:00.629 "uuid": "08397c3b-f86c-4fdb-a1fc-6f31137fb7a0", 00:20:00.629 "optimal_io_boundary": 0, 00:20:00.629 "md_size": 0, 00:20:00.629 "dif_type": 0, 00:20:00.629 "dif_is_head_of_md": false, 00:20:00.629 "dif_pi_format": 0 00:20:00.629 } 00:20:00.629 }, 00:20:00.629 { 00:20:00.629 "method": "bdev_wait_for_examine" 
00:20:00.629 } 00:20:00.629 ] 00:20:00.629 }, 00:20:00.629 { 00:20:00.629 "subsystem": "nbd", 00:20:00.629 "config": [] 00:20:00.629 }, 00:20:00.629 { 00:20:00.629 "subsystem": "scheduler", 00:20:00.629 "config": [ 00:20:00.629 { 00:20:00.629 "method": "framework_set_scheduler", 00:20:00.629 "params": { 00:20:00.629 "name": "static" 00:20:00.629 } 00:20:00.629 } 00:20:00.629 ] 00:20:00.629 }, 00:20:00.629 { 00:20:00.629 "subsystem": "nvmf", 00:20:00.629 "config": [ 00:20:00.629 { 00:20:00.629 "method": "nvmf_set_config", 00:20:00.629 "params": { 00:20:00.629 "discovery_filter": "match_any", 00:20:00.629 "admin_cmd_passthru": { 00:20:00.629 "identify_ctrlr": false 00:20:00.629 }, 00:20:00.629 "dhchap_digests": [ 00:20:00.629 "sha256", 00:20:00.629 "sha384", 00:20:00.629 "sha512" 00:20:00.629 ], 00:20:00.629 "dhchap_dhgroups": [ 00:20:00.629 "null", 00:20:00.629 "ffdhe2048", 00:20:00.629 "ffdhe3072", 00:20:00.629 "ffdhe4096", 00:20:00.629 "ffdhe6144", 00:20:00.629 "ffdhe8192" 00:20:00.629 ] 00:20:00.629 } 00:20:00.629 }, 00:20:00.629 { 00:20:00.629 "method": "nvmf_set_max_subsystems", 00:20:00.629 "params": { 00:20:00.629 "max_subsystems": 1024 00:20:00.629 } 00:20:00.629 }, 00:20:00.629 { 00:20:00.629 "method": "nvmf_set_crdt", 00:20:00.629 "params": { 00:20:00.629 "crdt1": 0, 00:20:00.629 "crdt2": 0, 00:20:00.629 "crdt3": 0 00:20:00.629 } 00:20:00.629 }, 00:20:00.629 { 00:20:00.629 "method": "nvmf_create_transport", 00:20:00.629 "params": { 00:20:00.629 "trtype": "TCP", 00:20:00.629 "max_queue_depth": 128, 00:20:00.629 "max_io_qpairs_per_ctrlr": 127, 00:20:00.629 "in_capsule_data_size": 4096, 00:20:00.629 "max_io_size": 131072, 00:20:00.629 "io_unit_size": 131072, 00:20:00.629 "max_aq_depth": 128, 00:20:00.629 "num_shared_buffers": 511, 00:20:00.629 "buf_cache_size": 4294967295, 00:20:00.629 "dif_insert_or_strip": false, 00:20:00.629 "zcopy": false, 00:20:00.629 "c2h_success": false, 00:20:00.629 "sock_priority": 0, 00:20:00.629 "abort_timeout_sec": 1, 00:20:00.629 "ack_timeout": 0, 00:20:00.629 "data_wr_pool_size": 0 00:20:00.629 } 00:20:00.629 }, 00:20:00.629 { 00:20:00.629 "method": "nvmf_create_subsystem", 00:20:00.629 "params": { 00:20:00.629 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.629 "allow_any_host": false, 00:20:00.629 "serial_number": "00000000000000000000", 00:20:00.629 "model_number": "SPDK bdev Controller", 00:20:00.629 "max_namespaces": 32, 00:20:00.629 "min_cntlid": 1, 00:20:00.629 "max_cntlid": 65519, 00:20:00.629 "ana_reporting": false 00:20:00.629 } 00:20:00.629 }, 00:20:00.629 { 00:20:00.629 "method": "nvmf_subsystem_add_host", 00:20:00.629 "params": { 00:20:00.629 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.629 "host": "nqn.2016-06.io.spdk:host1", 00:20:00.629 "psk": "key0" 00:20:00.629 } 00:20:00.629 }, 00:20:00.629 { 00:20:00.629 "method": "nvmf_subsystem_add_ns", 00:20:00.629 "params": { 00:20:00.629 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.629 "namespace": { 00:20:00.629 "nsid": 1, 00:20:00.629 "bdev_name": "malloc0", 00:20:00.629 "nguid": "08397C3BF86C4FDBA1FC6F31137FB7A0", 00:20:00.629 "uuid": "08397c3b-f86c-4fdb-a1fc-6f31137fb7a0", 00:20:00.629 "no_auto_visible": false 00:20:00.629 } 00:20:00.629 } 00:20:00.629 }, 00:20:00.629 { 00:20:00.629 "method": "nvmf_subsystem_add_listener", 00:20:00.629 "params": { 00:20:00.629 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.629 "listen_address": { 00:20:00.629 "trtype": "TCP", 00:20:00.629 "adrfam": "IPv4", 00:20:00.629 "traddr": "10.0.0.2", 00:20:00.629 "trsvcid": "4420" 00:20:00.629 }, 00:20:00.629 
"secure_channel": false, 00:20:00.629 "sock_impl": "ssl" 00:20:00.629 } 00:20:00.629 } 00:20:00.629 ] 00:20:00.629 } 00:20:00.629 ] 00:20:00.629 }' 00:20:00.629 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.629 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1715689 00:20:00.629 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1715689 00:20:00.629 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:00.629 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1715689 ']' 00:20:00.629 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.629 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:00.629 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.629 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:00.629 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.629 [2024-11-19 10:47:07.914409] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:20:00.629 [2024-11-19 10:47:07.914457] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.629 [2024-11-19 10:47:07.993987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.629 [2024-11-19 10:47:08.034812] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.629 [2024-11-19 10:47:08.034849] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:00.629 [2024-11-19 10:47:08.034856] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:00.629 [2024-11-19 10:47:08.034862] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:00.629 [2024-11-19 10:47:08.034867] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:00.629 [2024-11-19 10:47:08.035482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.887 [2024-11-19 10:47:08.249328] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.887 [2024-11-19 10:47:08.281344] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:00.887 [2024-11-19 10:47:08.281537] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:01.455 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:01.455 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:01.455 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:01.455 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:01.455 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.455 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.455 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1715917 00:20:01.455 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1715917 /var/tmp/bdevperf.sock 00:20:01.455 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1715917 ']' 00:20:01.455 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:01.455 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:01.455 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:01.455 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:01.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
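(The initiator side is restarted the same way: the bperfcfg JSON echoed below — captured earlier from /var/tmp/bdevperf.sock and including the keyring entry plus the controller attach with "psk": "key0" — is handed to a new bdevperf on descriptor 63, so the TLS connection is re-created purely from configuration. Sketch under the same process-substitution assumption, bdevperf flags verbatim from this log:)

    # save the initiator configuration (keyring key0 + nvme0 attach referencing it)
    bperfcfg=$(rpc.py -s /var/tmp/bdevperf.sock save_config)

    # relaunch bdevperf from that JSON; the TLS attach happens at startup
    bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 \
        -c <(echo "$bperfcfg")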
00:20:01.455 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:01.455 "subsystems": [ 00:20:01.455 { 00:20:01.455 "subsystem": "keyring", 00:20:01.455 "config": [ 00:20:01.455 { 00:20:01.455 "method": "keyring_file_add_key", 00:20:01.455 "params": { 00:20:01.455 "name": "key0", 00:20:01.455 "path": "/tmp/tmp.NoflwE3gor" 00:20:01.455 } 00:20:01.455 } 00:20:01.455 ] 00:20:01.455 }, 00:20:01.455 { 00:20:01.455 "subsystem": "iobuf", 00:20:01.455 "config": [ 00:20:01.455 { 00:20:01.455 "method": "iobuf_set_options", 00:20:01.455 "params": { 00:20:01.455 "small_pool_count": 8192, 00:20:01.455 "large_pool_count": 1024, 00:20:01.455 "small_bufsize": 8192, 00:20:01.455 "large_bufsize": 135168, 00:20:01.455 "enable_numa": false 00:20:01.455 } 00:20:01.455 } 00:20:01.455 ] 00:20:01.455 }, 00:20:01.455 { 00:20:01.455 "subsystem": "sock", 00:20:01.455 "config": [ 00:20:01.455 { 00:20:01.455 "method": "sock_set_default_impl", 00:20:01.455 "params": { 00:20:01.455 "impl_name": "posix" 00:20:01.455 } 00:20:01.455 }, 00:20:01.455 { 00:20:01.455 "method": "sock_impl_set_options", 00:20:01.455 "params": { 00:20:01.455 "impl_name": "ssl", 00:20:01.455 "recv_buf_size": 4096, 00:20:01.455 "send_buf_size": 4096, 00:20:01.455 "enable_recv_pipe": true, 00:20:01.455 "enable_quickack": false, 00:20:01.455 "enable_placement_id": 0, 00:20:01.455 "enable_zerocopy_send_server": true, 00:20:01.455 "enable_zerocopy_send_client": false, 00:20:01.455 "zerocopy_threshold": 0, 00:20:01.455 "tls_version": 0, 00:20:01.455 "enable_ktls": false 00:20:01.455 } 00:20:01.455 }, 00:20:01.455 { 00:20:01.455 "method": "sock_impl_set_options", 00:20:01.455 "params": { 00:20:01.455 "impl_name": "posix", 00:20:01.455 "recv_buf_size": 2097152, 00:20:01.455 "send_buf_size": 2097152, 00:20:01.455 "enable_recv_pipe": true, 00:20:01.456 "enable_quickack": false, 00:20:01.456 "enable_placement_id": 0, 00:20:01.456 "enable_zerocopy_send_server": true, 00:20:01.456 "enable_zerocopy_send_client": false, 00:20:01.456 "zerocopy_threshold": 0, 00:20:01.456 "tls_version": 0, 00:20:01.456 "enable_ktls": false 00:20:01.456 } 00:20:01.456 } 00:20:01.456 ] 00:20:01.456 }, 00:20:01.456 { 00:20:01.456 "subsystem": "vmd", 00:20:01.456 "config": [] 00:20:01.456 }, 00:20:01.456 { 00:20:01.456 "subsystem": "accel", 00:20:01.456 "config": [ 00:20:01.456 { 00:20:01.456 "method": "accel_set_options", 00:20:01.456 "params": { 00:20:01.456 "small_cache_size": 128, 00:20:01.456 "large_cache_size": 16, 00:20:01.456 "task_count": 2048, 00:20:01.456 "sequence_count": 2048, 00:20:01.456 "buf_count": 2048 00:20:01.456 } 00:20:01.456 } 00:20:01.456 ] 00:20:01.456 }, 00:20:01.456 { 00:20:01.456 "subsystem": "bdev", 00:20:01.456 "config": [ 00:20:01.456 { 00:20:01.456 "method": "bdev_set_options", 00:20:01.456 "params": { 00:20:01.456 "bdev_io_pool_size": 65535, 00:20:01.456 "bdev_io_cache_size": 256, 00:20:01.456 "bdev_auto_examine": true, 00:20:01.456 "iobuf_small_cache_size": 128, 00:20:01.456 "iobuf_large_cache_size": 16 00:20:01.456 } 00:20:01.456 }, 00:20:01.456 { 00:20:01.456 "method": "bdev_raid_set_options", 00:20:01.456 "params": { 00:20:01.456 "process_window_size_kb": 1024, 00:20:01.456 "process_max_bandwidth_mb_sec": 0 00:20:01.456 } 00:20:01.456 }, 00:20:01.456 { 00:20:01.456 "method": "bdev_iscsi_set_options", 00:20:01.456 "params": { 00:20:01.456 "timeout_sec": 30 00:20:01.456 } 00:20:01.456 }, 00:20:01.456 { 00:20:01.456 "method": "bdev_nvme_set_options", 00:20:01.456 "params": { 00:20:01.456 "action_on_timeout": "none", 
00:20:01.456 "timeout_us": 0, 00:20:01.456 "timeout_admin_us": 0, 00:20:01.456 "keep_alive_timeout_ms": 10000, 00:20:01.456 "arbitration_burst": 0, 00:20:01.456 "low_priority_weight": 0, 00:20:01.456 "medium_priority_weight": 0, 00:20:01.456 "high_priority_weight": 0, 00:20:01.456 "nvme_adminq_poll_period_us": 10000, 00:20:01.456 "nvme_ioq_poll_period_us": 0, 00:20:01.456 "io_queue_requests": 512, 00:20:01.456 "delay_cmd_submit": true, 00:20:01.456 "transport_retry_count": 4, 00:20:01.456 "bdev_retry_count": 3, 00:20:01.456 "transport_ack_timeout": 0, 00:20:01.456 "ctrlr_loss_timeout_sec": 0, 00:20:01.456 "reconnect_delay_sec": 0, 00:20:01.456 "fast_io_fail_timeout_sec": 0, 00:20:01.456 "disable_auto_failback": false, 00:20:01.456 "generate_uuids": false, 00:20:01.456 "transport_tos": 0, 00:20:01.456 "nvme_error_stat": false, 00:20:01.456 "rdma_srq_size": 0, 00:20:01.456 "io_path_stat": false, 00:20:01.456 "allow_accel_sequence": false, 00:20:01.456 "rdma_max_cq_size": 0, 00:20:01.456 "rdma_cm_event_timeout_ms": 0, 00:20:01.456 "dhchap_digests": [ 00:20:01.456 "sha256", 00:20:01.456 "sha384", 00:20:01.456 "sha512" 00:20:01.456 ], 00:20:01.456 "dhchap_dhgroups": [ 00:20:01.456 "null", 00:20:01.456 "ffdhe2048", 00:20:01.456 "ffdhe3072", 00:20:01.456 "ffdhe4096", 00:20:01.456 "ffdhe6144", 00:20:01.456 "ffdhe8192" 00:20:01.456 ] 00:20:01.456 } 00:20:01.456 }, 00:20:01.456 { 00:20:01.456 "method": "bdev_nvme_attach_controller", 00:20:01.456 "params": { 00:20:01.456 "name": "nvme0", 00:20:01.456 "trtype": "TCP", 00:20:01.456 "adrfam": "IPv4", 00:20:01.456 "traddr": "10.0.0.2", 00:20:01.456 "trsvcid": "4420", 00:20:01.456 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:01.456 "prchk_reftag": false, 00:20:01.456 "prchk_guard": false, 00:20:01.456 "ctrlr_loss_timeout_sec": 0, 00:20:01.456 "reconnect_delay_sec": 0, 00:20:01.456 "fast_io_fail_timeout_sec": 0, 00:20:01.456 "psk": "key0", 00:20:01.456 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:01.456 "hdgst": false, 00:20:01.456 "ddgst": false, 00:20:01.456 "multipath": "multipath" 00:20:01.456 } 00:20:01.456 }, 00:20:01.456 { 00:20:01.456 "method": "bdev_nvme_set_hotplug", 00:20:01.456 "params": { 00:20:01.456 "period_us": 100000, 00:20:01.456 "enable": false 00:20:01.456 } 00:20:01.456 }, 00:20:01.456 { 00:20:01.456 "method": "bdev_enable_histogram", 00:20:01.456 "params": { 00:20:01.456 "name": "nvme0n1", 00:20:01.456 "enable": true 00:20:01.456 } 00:20:01.456 }, 00:20:01.456 { 00:20:01.456 "method": "bdev_wait_for_examine" 00:20:01.456 } 00:20:01.456 ] 00:20:01.456 }, 00:20:01.456 { 00:20:01.456 "subsystem": "nbd", 00:20:01.456 "config": [] 00:20:01.456 } 00:20:01.456 ] 00:20:01.456 }' 00:20:01.456 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:01.456 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.456 [2024-11-19 10:47:08.831533] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:20:01.456 [2024-11-19 10:47:08.831578] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1715917 ] 00:20:01.715 [2024-11-19 10:47:08.906782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.715 [2024-11-19 10:47:08.947539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:01.715 [2024-11-19 10:47:09.099898] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:02.282 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:02.282 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:02.282 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:02.282 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:02.541 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.541 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:02.541 Running I/O for 1 seconds... 00:20:03.918 5164.00 IOPS, 20.17 MiB/s 00:20:03.918 Latency(us) 00:20:03.918 [2024-11-19T09:47:11.367Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.918 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:03.918 Verification LBA range: start 0x0 length 0x2000 00:20:03.918 nvme0n1 : 1.03 5162.75 20.17 0.00 0.00 24554.75 5014.93 31685.23 00:20:03.918 [2024-11-19T09:47:11.367Z] =================================================================================================================== 00:20:03.918 [2024-11-19T09:47:11.367Z] Total : 5162.75 20.17 0.00 0.00 24554.75 5014.93 31685.23 00:20:03.918 { 00:20:03.918 "results": [ 00:20:03.918 { 00:20:03.918 "job": "nvme0n1", 00:20:03.918 "core_mask": "0x2", 00:20:03.918 "workload": "verify", 00:20:03.918 "status": "finished", 00:20:03.918 "verify_range": { 00:20:03.918 "start": 0, 00:20:03.918 "length": 8192 00:20:03.918 }, 00:20:03.918 "queue_depth": 128, 00:20:03.918 "io_size": 4096, 00:20:03.918 "runtime": 1.025036, 00:20:03.918 "iops": 5162.745503572557, 00:20:03.918 "mibps": 20.1669746233303, 00:20:03.918 "io_failed": 0, 00:20:03.918 "io_timeout": 0, 00:20:03.918 "avg_latency_us": 24554.748020638206, 00:20:03.918 "min_latency_us": 5014.928695652174, 00:20:03.918 "max_latency_us": 31685.231304347824 00:20:03.918 } 00:20:03.918 ], 00:20:03.918 "core_count": 1 00:20:03.918 } 00:20:03.918 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:20:03.918 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:03.918 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:03.918 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:20:03.918 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:20:03.918 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = 
--pid ']' 00:20:03.918 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:03.918 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:03.918 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:03.918 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:03.918 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:03.918 nvmf_trace.0 00:20:03.918 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:20:03.918 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1715917 00:20:03.918 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1715917 ']' 00:20:03.918 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1715917 00:20:03.918 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:03.918 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:03.918 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1715917 00:20:03.918 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:03.918 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:03.918 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1715917' 00:20:03.918 killing process with pid 1715917 00:20:03.918 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1715917 00:20:03.918 Received shutdown signal, test time was about 1.000000 seconds 00:20:03.918 00:20:03.918 Latency(us) 00:20:03.918 [2024-11-19T09:47:11.367Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.918 [2024-11-19T09:47:11.367Z] =================================================================================================================== 00:20:03.918 [2024-11-19T09:47:11.367Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:03.918 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1715917 00:20:03.919 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:03.919 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:03.919 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:03.919 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:03.919 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:03.919 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:03.919 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:03.919 rmmod nvme_tcp 00:20:03.919 rmmod nvme_fabrics 00:20:03.919 rmmod nvme_keyring 00:20:04.177 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:04.177 10:47:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:04.177 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:04.177 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 1715689 ']' 00:20:04.177 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 1715689 00:20:04.177 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1715689 ']' 00:20:04.177 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1715689 00:20:04.177 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:04.177 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:04.177 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1715689 00:20:04.177 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:04.177 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:04.177 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1715689' 00:20:04.177 killing process with pid 1715689 00:20:04.177 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1715689 00:20:04.177 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1715689 00:20:04.177 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:04.177 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:04.177 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:04.177 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:20:04.177 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:20:04.177 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:04.177 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:20:04.177 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:04.177 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:04.177 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.177 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:04.177 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.GgEl8KWgZx /tmp/tmp.mTi2bX3ntL /tmp/tmp.NoflwE3gor 00:20:06.714 00:20:06.714 real 1m19.734s 00:20:06.714 user 2m2.017s 00:20:06.714 sys 0m30.781s 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.714 ************************************ 00:20:06.714 END TEST nvmf_tls 
00:20:06.714 ************************************ 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:06.714 ************************************ 00:20:06.714 START TEST nvmf_fips 00:20:06.714 ************************************ 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:06.714 * Looking for test storage... 00:20:06.714 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:06.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.714 --rc genhtml_branch_coverage=1 00:20:06.714 --rc genhtml_function_coverage=1 00:20:06.714 --rc genhtml_legend=1 00:20:06.714 --rc geninfo_all_blocks=1 00:20:06.714 --rc geninfo_unexecuted_blocks=1 00:20:06.714 00:20:06.714 ' 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:06.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.714 --rc genhtml_branch_coverage=1 00:20:06.714 --rc genhtml_function_coverage=1 00:20:06.714 --rc genhtml_legend=1 00:20:06.714 --rc geninfo_all_blocks=1 00:20:06.714 --rc geninfo_unexecuted_blocks=1 00:20:06.714 00:20:06.714 ' 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:06.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.714 --rc genhtml_branch_coverage=1 00:20:06.714 --rc genhtml_function_coverage=1 00:20:06.714 --rc genhtml_legend=1 00:20:06.714 --rc geninfo_all_blocks=1 00:20:06.714 --rc geninfo_unexecuted_blocks=1 00:20:06.714 00:20:06.714 ' 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:06.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.714 --rc genhtml_branch_coverage=1 00:20:06.714 --rc genhtml_function_coverage=1 00:20:06.714 --rc genhtml_legend=1 00:20:06.714 --rc geninfo_all_blocks=1 00:20:06.714 --rc geninfo_unexecuted_blocks=1 00:20:06.714 00:20:06.714 ' 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
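The long scripts/common.sh trace above is a pure-bash semantic version comparison: each version string is split on '.' and '-' into an array, and fields are compared numerically left to right. Condensed to its core idea (a sketch; the real cmp_versions also validates each field and supports '>='-style operators):

    # Return 0 if version $1 sorts strictly before version $2.
    version_lt() {
        local -a ver1 ver2
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # earliest differing field decides
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1                                              # equal versions are not less-than
    }

    version_lt 1.15 2 && echo "lcov predates 2.x, use the 1.x-era --rc option names"

Here it decides that the installed lcov (1.15) predates 2.x, which selects the --rc lcov_branch_coverage/lcov_function_coverage options exported just after.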
FreeBSD ]] 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:06.714 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:06.715 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:20:06.715 10:47:13 
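A side effect visible in the paths/export.sh trace: the golangci, protoc, and go directories are prepended to PATH on every source, so by this point in the run each appears half a dozen times. Execution is unaffected, but a guard of this shape (hypothetical, not what export.sh actually does) would keep the prepend idempotent:

    # Prepend a directory to PATH only when it is not already present.
    pathmunge() {
        case ":$PATH:" in
            *":$1:"*) ;;                  # already there, leave PATH alone
            *) PATH="$1:$PATH" ;;
        esac
    }
    pathmunge /opt/golangci/1.54.2/bin
    pathmunge /opt/protoc/21.7/bin
    pathmunge /opt/go/1.21.1/bin
    export PATH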
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:06.715 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:06.715 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:06.715 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:06.715 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:06.715 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:06.715 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:06.715 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:06.715 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:06.715 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:06.715 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:06.715 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:06.715 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:06.715 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:06.715 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:06.715 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:06.715 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:06.715 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:06.715 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:06.715 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:06.715 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:06.715 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:06.715 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:06.715 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:06.715 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:06.715 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:06.715 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:06.715 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:06.715 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:20:06.715 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:06.715 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:06.715 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:20:06.715 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:06.715 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:20:06.715 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:06.715 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:20:06.715 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:06.715 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:20:06.715 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:20:06.715 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:20:06.715 Error setting digest 00:20:06.715 40F2D587997F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:06.715 40F2D587997F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:06.715 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:20:06.715 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:06.715 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:06.715 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:06.716 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:06.716 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:06.716 
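The fips.sh preamble above is a capability probe in three steps: require OpenSSL >= 3.0.0, locate fips.so under `openssl info -modulesdir` (on RHEL, `openssl fipsinstall -help` only prints the distro warning captured above), then point OPENSSL_CONF at a FIPS-enabled config and prove enforcement by expecting a non-approved digest to fail, which is exactly what the "Error setting digest" lines show. The same probe as a standalone sketch (paths are illustrative; assumes OPENSSL_CONF already points at a FIPS-only configuration):

    #!/usr/bin/env bash
    # Probe for an enforcing FIPS setup on OpenSSL 3.x.
    modules_dir=$(openssl info -modulesdir)
    [ -f "$modules_dir/fips.so" ] || { echo "no FIPS provider module" >&2; exit 1; }

    openssl list -providers | grep -qi fips || { echo "fips provider not loaded" >&2; exit 1; }

    # MD5 is not FIPS-approved, so it must be rejected when enforcement is on.
    if echo test | openssl md5 >/dev/null 2>&1; then
        echo "openssl md5 succeeded: FIPS mode is not enforced" >&2
        exit 1
    fi
    echo "FIPS enforcement confirmed"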
10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:06.716 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:06.716 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:06.716 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:06.716 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.716 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:06.716 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.716 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:06.716 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:06.716 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:06.716 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:13.281 10:47:19 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:13.281 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:13.281 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:13.281 10:47:19 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:13.281 Found net devices under 0000:86:00.0: cvl_0_0 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:13.281 Found net devices under 0000:86:00.1: cvl_0_1 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:13.281 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:13.282 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:13.282 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:13.282 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:13.282 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:13.282 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:13.282 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:13.282 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:13.282 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:13.282 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:13.282 10:47:19 
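The device scan above walks a whitelist of Intel and Mellanox PCI IDs and then maps each matching function to its kernel interface through /sys/bus/pci/devices/<addr>/net; here both ports of an Intel E810 (device 0x159b, ice driver) resolve to cvl_0_0 and cvl_0_1. The sysfs step on its own looks roughly like this (a sketch; the device list is trimmed to the one ID that matched in this run):

    # Map E810 PCI functions (8086:159b) to their net device names via sysfs.
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdir" ] || continue          # skip functions with no bound netdev
            echo "Found net devices under $pci: ${netdir##*/}"
        done
    done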
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:13.282 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:13.282 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:13.282 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:13.282 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:13.282 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:13.282 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:13.282 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:13.282 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:13.282 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:13.282 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:13.282 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:13.282 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:13.282 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:13.282 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:13.282 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:13.282 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.426 ms 00:20:13.282 00:20:13.282 --- 10.0.0.2 ping statistics --- 00:20:13.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.282 rtt min/avg/max/mdev = 0.426/0.426/0.426/0.000 ms 00:20:13.282 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:13.282 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:13.282 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:20:13.282 00:20:13.282 --- 10.0.0.1 ping statistics --- 00:20:13.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.282 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:20:13.282 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:13.282 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:20:13.282 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:13.282 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:13.282 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:13.282 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:13.282 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:13.282 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:13.282 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:13.282 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:13.282 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:13.282 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:13.282 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:13.282 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=1719936 00:20:13.282 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:13.282 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 1719936 00:20:13.282 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1719936 ']' 00:20:13.282 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.282 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:13.282 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.282 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:13.282 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:13.282 [2024-11-19 10:47:20.191927] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
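Everything in this run lives on one host, so the harness fakes a two-machine fabric with a network namespace: one E810 port (cvl_0_0) is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2 for the target, the peer port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens TCP 4420, and the two pings above prove reachability in both directions before nvmf_tgt is launched inside the namespace. The topology, reduced to its commands (interface and namespace names as in this run; binary paths shortened):

    # Target side in a namespace, initiator side in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # NVMe/TCP port
    ping -c 1 10.0.0.2                                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2        # target app on core 1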
00:20:13.282 [2024-11-19 10:47:20.191978] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:13.282 [2024-11-19 10:47:20.271329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.282 [2024-11-19 10:47:20.311927] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:13.282 [2024-11-19 10:47:20.311968] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:13.282 [2024-11-19 10:47:20.311976] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:13.282 [2024-11-19 10:47:20.311982] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:13.282 [2024-11-19 10:47:20.311987] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:13.282 [2024-11-19 10:47:20.312574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.850 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:13.850 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:13.850 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:13.850 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:13.850 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:13.850 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:13.850 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:13.850 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:13.850 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:13.850 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.RDo 00:20:13.850 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:13.850 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.RDo 00:20:13.850 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.RDo 00:20:13.850 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.RDo 00:20:13.850 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:13.850 [2024-11-19 10:47:21.249632] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:13.850 [2024-11-19 10:47:21.265639] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:13.850 [2024-11-19 10:47:21.265833] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:14.109 malloc0 00:20:14.109 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:14.109 10:47:21 
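The TLS material here is a single NVMe/TCP PSK interchange string: written to a mktemp file, locked down to 0600, and later handed to each SPDK app over RPC (the bdevperf initiator below registers it as key0). Stripped to the key-handling steps (a sketch; rpc.py path shortened, and the transport/subsystem/listener RPCs issued by fips.sh in between are elided):

    # Provision a TLS pre-shared key file for SPDK's keyring.
    key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    key_path=$(mktemp -t spdk-psk.XXX)
    printf '%s' "$key" > "$key_path"      # no trailing newline, matching the echo -n above
    chmod 0600 "$key_path"                # keep the PSK private to the test user
    rpc.py keyring_file_add_key key0 "$key_path"   # register with an app's keyring
    # ... transport, subsystem, and TLS listener setup follow in fips.sh ...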
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1720186 00:20:14.109 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:14.109 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1720186 /var/tmp/bdevperf.sock 00:20:14.109 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1720186 ']' 00:20:14.109 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:14.109 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:14.109 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:14.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:14.109 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:14.109 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:14.109 [2024-11-19 10:47:21.394653] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:20:14.109 [2024-11-19 10:47:21.394701] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1720186 ] 00:20:14.109 [2024-11-19 10:47:21.467754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.109 [2024-11-19 10:47:21.508206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:15.043 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:15.043 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:15.043 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.RDo 00:20:15.043 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:15.302 [2024-11-19 10:47:22.601744] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:15.302 TLSTESTn1 00:20:15.302 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:15.559 Running I/O for 10 seconds... 
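bdevperf is started with -z, so instead of running a hard-wired job it parks on its RPC socket; the harness then injects the PSK, attaches a TLS-wrapped controller, and only then fires the queued workload with bdevperf.py perform_tests. The control flow, minus the retry/wait plumbing (binary and script paths shortened from the full Jenkins paths in the trace):

    # Drive bdevperf entirely over RPC (arguments as captured in this run).
    sock=/var/tmp/bdevperf.sock
    bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &
    # ... waitforlisten until $sock accepts RPCs ...
    rpc.py -s "$sock" keyring_file_add_key key0 /tmp/spdk-psk.RDo
    rpc.py -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    bdevperf.py -s "$sock" perform_tests          # runs the -q/-o/-w/-t job given above

The attach names the bdev TLSTEST, which is why the job in the results below reports as TLSTESTn1.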
00:20:17.430 5354.00 IOPS, 20.91 MiB/s [2024-11-19T09:47:25.814Z] 5402.50 IOPS, 21.10 MiB/s [2024-11-19T09:47:27.188Z] 5417.00 IOPS, 21.16 MiB/s [2024-11-19T09:47:28.123Z] 5433.00 IOPS, 21.22 MiB/s [2024-11-19T09:47:29.058Z] 5409.40 IOPS, 21.13 MiB/s [2024-11-19T09:47:30.002Z] 5372.17 IOPS, 20.99 MiB/s [2024-11-19T09:47:30.936Z] 5376.86 IOPS, 21.00 MiB/s [2024-11-19T09:47:31.872Z] 5373.12 IOPS, 20.99 MiB/s [2024-11-19T09:47:33.249Z] 5391.22 IOPS, 21.06 MiB/s [2024-11-19T09:47:33.249Z] 5389.50 IOPS, 21.05 MiB/s 00:20:25.800 Latency(us) 00:20:25.800 [2024-11-19T09:47:33.249Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.800 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:25.800 Verification LBA range: start 0x0 length 0x2000 00:20:25.800 TLSTESTn1 : 10.01 5395.10 21.07 0.00 0.00 23690.44 5242.88 23934.89 00:20:25.800 [2024-11-19T09:47:33.249Z] =================================================================================================================== 00:20:25.800 [2024-11-19T09:47:33.249Z] Total : 5395.10 21.07 0.00 0.00 23690.44 5242.88 23934.89 00:20:25.800 { 00:20:25.800 "results": [ 00:20:25.800 { 00:20:25.800 "job": "TLSTESTn1", 00:20:25.800 "core_mask": "0x4", 00:20:25.800 "workload": "verify", 00:20:25.800 "status": "finished", 00:20:25.800 "verify_range": { 00:20:25.800 "start": 0, 00:20:25.800 "length": 8192 00:20:25.800 }, 00:20:25.800 "queue_depth": 128, 00:20:25.800 "io_size": 4096, 00:20:25.800 "runtime": 10.012975, 00:20:25.800 "iops": 5395.09985793433, 00:20:25.800 "mibps": 21.07460882005598, 00:20:25.800 "io_failed": 0, 00:20:25.800 "io_timeout": 0, 00:20:25.800 "avg_latency_us": 23690.444881096962, 00:20:25.800 "min_latency_us": 5242.88, 00:20:25.800 "max_latency_us": 23934.88695652174 00:20:25.800 } 00:20:25.800 ], 00:20:25.800 "core_count": 1 00:20:25.800 } 00:20:25.800 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:25.800 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:25.800 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:20:25.800 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:20:25.800 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:20:25.800 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:25.800 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:25.800 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:25.800 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:25.800 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:25.800 nvmf_trace.0 00:20:25.800 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:20:25.800 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1720186 00:20:25.800 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1720186 ']' 00:20:25.800 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 
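The human-readable latency table and the JSON block above carry the same numbers; the JSON is what downstream tooling would consume. A hypothetical post-processing one-liner, assuming the JSON summary has been saved to results.json:

    # Pull the headline numbers out of bdevperf's JSON summary.
    jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.avg_latency_us) us avg latency"' results.json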
-- # kill -0 1720186 00:20:25.800 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:25.800 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:25.800 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1720186 00:20:25.800 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:25.800 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:25.800 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1720186' 00:20:25.800 killing process with pid 1720186 00:20:25.800 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1720186 00:20:25.800 Received shutdown signal, test time was about 10.000000 seconds 00:20:25.800 00:20:25.800 Latency(us) 00:20:25.800 [2024-11-19T09:47:33.249Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.801 [2024-11-19T09:47:33.250Z] =================================================================================================================== 00:20:25.801 [2024-11-19T09:47:33.250Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:25.801 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1720186 00:20:25.801 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:25.801 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:25.801 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:20:25.801 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:25.801 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:20:25.801 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:25.801 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:25.801 rmmod nvme_tcp 00:20:25.801 rmmod nvme_fabrics 00:20:25.801 rmmod nvme_keyring 00:20:25.801 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:25.801 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:20:25.801 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:20:25.801 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 1719936 ']' 00:20:25.801 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 1719936 00:20:25.801 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1719936 ']' 00:20:25.801 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1719936 00:20:25.801 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:25.801 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:25.801 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1719936 00:20:26.060 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:26.060 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:26.060 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1719936' 00:20:26.060 killing process with pid 1719936 00:20:26.060 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1719936 00:20:26.060 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1719936 00:20:26.060 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:26.060 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:26.060 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:26.060 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:20:26.060 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:20:26.060 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:26.060 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:20:26.060 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:26.060 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:26.060 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.060 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:26.060 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.598 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:28.598 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.RDo 00:20:28.598 00:20:28.598 real 0m21.747s 00:20:28.598 user 0m23.490s 00:20:28.598 sys 0m9.758s 00:20:28.598 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:28.598 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:28.598 ************************************ 00:20:28.598 END TEST nvmf_fips 00:20:28.598 ************************************ 00:20:28.598 10:47:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:28.598 10:47:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:28.598 10:47:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:28.598 10:47:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:28.598 ************************************ 00:20:28.598 START TEST nvmf_control_msg_list 00:20:28.598 ************************************ 00:20:28.598 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:28.598 * Looking for test storage... 
00:20:28.598 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:28.598 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:28.598 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:20:28.598 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:28.598 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:28.598 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:28.598 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:28.598 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:28.598 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:28.598 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:28.598 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:28.598 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:28.598 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:20:28.598 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:28.598 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:28.598 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:28.598 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:28.598 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:28.598 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:28.598 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:28.598 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:28.598 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:28.598 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:28.598 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:28.598 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:28.598 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:28.598 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:28.598 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:28.598 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:28.598 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:20:28.598 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:28.598 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:28.598 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:28.598 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:28.598 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:28.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.598 --rc genhtml_branch_coverage=1 00:20:28.598 --rc genhtml_function_coverage=1 00:20:28.598 --rc genhtml_legend=1 00:20:28.598 --rc geninfo_all_blocks=1 00:20:28.598 --rc geninfo_unexecuted_blocks=1 00:20:28.598 00:20:28.598 ' 00:20:28.598 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:28.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.598 --rc genhtml_branch_coverage=1 00:20:28.598 --rc genhtml_function_coverage=1 00:20:28.598 --rc genhtml_legend=1 00:20:28.598 --rc geninfo_all_blocks=1 00:20:28.598 --rc geninfo_unexecuted_blocks=1 00:20:28.598 00:20:28.598 ' 00:20:28.598 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:28.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.598 --rc genhtml_branch_coverage=1 00:20:28.598 --rc genhtml_function_coverage=1 00:20:28.598 --rc genhtml_legend=1 00:20:28.598 --rc geninfo_all_blocks=1 00:20:28.599 --rc geninfo_unexecuted_blocks=1 00:20:28.599 00:20:28.599 ' 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:28.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.599 --rc genhtml_branch_coverage=1 00:20:28.599 --rc genhtml_function_coverage=1 00:20:28.599 --rc genhtml_legend=1 00:20:28.599 --rc geninfo_all_blocks=1 00:20:28.599 --rc geninfo_unexecuted_blocks=1 00:20:28.599 00:20:28.599 ' 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:28.599 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:20:28.599 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:20:34.008 10:47:41 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:34.008 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:34.008 10:47:41 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:34.008 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:34.008 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:34.009 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:34.009 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:34.009 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:34.009 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:34.009 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.009 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:34.009 Found net devices under 0000:86:00.0: cvl_0_0 00:20:34.009 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:34.009 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:34.009 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:34.009 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:34.009 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:34.009 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:34.009 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:34.009 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.009 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:34.009 Found net devices under 0000:86:00.1: cvl_0_1 00:20:34.009 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:34.009 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:34.009 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:20:34.009 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:34.009 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:34.009 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:34.009 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:34.009 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:34.009 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:34.009 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:34.009 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:34.009 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:34.009 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:34.009 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:34.009 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:34.009 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:34.009 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:34.009 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:34.009 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:34.268 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:34.268 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:34.268 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:34.268 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:34.268 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:34.268 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:34.268 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:34.268 10:47:41 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:34.268 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:34.268 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:34.268 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:34.268 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.476 ms 00:20:34.268 00:20:34.268 --- 10.0.0.2 ping statistics --- 00:20:34.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.268 rtt min/avg/max/mdev = 0.476/0.476/0.476/0.000 ms 00:20:34.268 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:34.268 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:34.268 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:20:34.268 00:20:34.268 --- 10.0.0.1 ping statistics --- 00:20:34.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.268 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:20:34.268 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:34.268 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:20:34.268 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:34.268 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:34.268 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:34.268 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:34.268 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:34.268 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:34.268 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:34.527 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:20:34.527 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:34.527 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:34.527 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:34.527 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=1725571 00:20:34.527 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 1725571 00:20:34.527 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:34.527 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 1725571 ']' 00:20:34.527 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.527 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:34.527 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:34.527 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:34.527 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:34.527 [2024-11-19 10:47:41.779390] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:20:34.527 [2024-11-19 10:47:41.779441] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:34.527 [2024-11-19 10:47:41.858709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.527 [2024-11-19 10:47:41.900273] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:34.527 [2024-11-19 10:47:41.900309] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:34.527 [2024-11-19 10:47:41.900316] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:34.527 [2024-11-19 10:47:41.900322] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:34.527 [2024-11-19 10:47:41.900328] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
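Before the target comes up, nvmftestinit builds the two-port test topology traced above: one physical port (cvl_0_0) is moved into a dedicated network namespace and becomes the target side at 10.0.0.2, while its peer port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, so traffic between the two addresses takes the NIC path instead of being short-circuited through the local stack. Condensed from the commands in this trace (interface names come from this rig's ice driver and will differ on other hardware):

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1    # start clean
    ip netns add cvl_0_0_ns_spdk                          # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port with a tagged rule (see the ipts/iptr sketch earlier)
    ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator
    # nvmfappstart then launches the target app inside the namespace:
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF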
00:20:34.527 [2024-11-19 10:47:41.900887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:34.786 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:34.786 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:20:34.786 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:34.786 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:34.786 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:34.786 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:34.786 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:34.786 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:34.786 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:34.786 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.786 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:34.786 [2024-11-19 10:47:42.040425] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:34.786 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.787 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:34.787 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.787 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:34.787 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.787 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:34.787 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.787 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:34.787 Malloc0 00:20:34.787 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.787 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:34.787 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.787 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:34.787 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.787 10:47:42 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:34.787 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.787 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:34.787 [2024-11-19 10:47:42.084851] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:34.787 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.787 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1725591 00:20:34.787 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:34.787 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1725593 00:20:34.787 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:34.787 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1725594 00:20:34.787 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1725591 00:20:34.787 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:34.787 [2024-11-19 10:47:42.173543] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:34.787 [2024-11-19 10:47:42.173733] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:34.787 [2024-11-19 10:47:42.173894] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:36.164 Initializing NVMe Controllers 00:20:36.164 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:36.164 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:20:36.164 Initialization complete. Launching workers. 
00:20:36.164 ======================================================== 00:20:36.164 Latency(us) 00:20:36.164 Device Information : IOPS MiB/s Average min max 00:20:36.164 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40931.69 40651.22 41895.82 00:20:36.164 ======================================================== 00:20:36.164 Total : 25.00 0.10 40931.69 40651.22 41895.82 00:20:36.164 00:20:36.164 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1725593 00:20:36.164 Initializing NVMe Controllers 00:20:36.164 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:36.164 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:20:36.164 Initialization complete. Launching workers. 00:20:36.164 ======================================================== 00:20:36.164 Latency(us) 00:20:36.164 Device Information : IOPS MiB/s Average min max 00:20:36.164 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 73.00 0.29 14111.32 130.73 41840.98 00:20:36.164 ======================================================== 00:20:36.164 Total : 73.00 0.29 14111.32 130.73 41840.98 00:20:36.164 00:20:36.164 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1725594 00:20:36.164 Initializing NVMe Controllers 00:20:36.164 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:36.164 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:20:36.164 Initialization complete. Launching workers. 00:20:36.164 ======================================================== 00:20:36.164 Latency(us) 00:20:36.164 Device Information : IOPS MiB/s Average min max 00:20:36.164 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 152.00 0.59 6574.30 136.56 41031.31 00:20:36.164 ======================================================== 00:20:36.164 Total : 152.00 0.59 6574.30 136.56 41031.31 00:20:36.164 00:20:36.164 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:36.164 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:20:36.164 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:36.164 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:20:36.164 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:36.164 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:20:36.164 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:36.164 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:36.164 rmmod nvme_tcp 00:20:36.164 rmmod nvme_fabrics 00:20:36.164 rmmod nvme_keyring 00:20:36.164 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:36.164 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:20:36.164 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:20:36.164 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 
-- # '[' -n 1725571 ']' 00:20:36.164 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 1725571 00:20:36.164 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 1725571 ']' 00:20:36.164 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 1725571 00:20:36.164 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:20:36.164 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:36.164 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1725571 00:20:36.164 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:36.164 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:36.164 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1725571' 00:20:36.164 killing process with pid 1725571 00:20:36.164 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 1725571 00:20:36.165 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 1725571 00:20:36.423 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:36.423 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:36.424 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:36.424 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:20:36.424 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:20:36.424 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:36.424 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:20:36.424 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:36.424 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:36.424 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.424 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:36.424 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.957 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:38.958 00:20:38.958 real 0m10.250s 00:20:38.958 user 0m7.055s 00:20:38.958 sys 0m5.416s 00:20:38.958 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:38.958 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:38.958 ************************************ 00:20:38.958 END TEST nvmf_control_msg_list 00:20:38.958 
************************************ 00:20:38.958 10:47:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:38.958 10:47:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:38.958 10:47:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:38.958 10:47:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:38.958 ************************************ 00:20:38.958 START TEST nvmf_wait_for_buf 00:20:38.958 ************************************ 00:20:38.958 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:38.958 * Looking for test storage... 00:20:38.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:38.958 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:38.958 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:20:38.958 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:38.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.958 --rc genhtml_branch_coverage=1 00:20:38.958 --rc genhtml_function_coverage=1 00:20:38.958 --rc genhtml_legend=1 00:20:38.958 --rc geninfo_all_blocks=1 00:20:38.958 --rc geninfo_unexecuted_blocks=1 00:20:38.958 00:20:38.958 ' 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:38.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.958 --rc genhtml_branch_coverage=1 00:20:38.958 --rc genhtml_function_coverage=1 00:20:38.958 --rc genhtml_legend=1 00:20:38.958 --rc geninfo_all_blocks=1 00:20:38.958 --rc geninfo_unexecuted_blocks=1 00:20:38.958 00:20:38.958 ' 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:38.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.958 --rc genhtml_branch_coverage=1 00:20:38.958 --rc genhtml_function_coverage=1 00:20:38.958 --rc genhtml_legend=1 00:20:38.958 --rc geninfo_all_blocks=1 00:20:38.958 --rc geninfo_unexecuted_blocks=1 00:20:38.958 00:20:38.958 ' 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:38.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.958 --rc genhtml_branch_coverage=1 00:20:38.958 --rc genhtml_function_coverage=1 00:20:38.958 --rc genhtml_legend=1 00:20:38.958 --rc geninfo_all_blocks=1 00:20:38.958 --rc geninfo_unexecuted_blocks=1 00:20:38.958 00:20:38.958 ' 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:38.958 10:47:46 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.958 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:20:38.959 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.959 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:20:38.959 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:38.959 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:38.959 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:38.959 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:38.959 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:38.959 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:38.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:38.959 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:38.959 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:38.959 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:38.959 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:20:38.959 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:20:38.959 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:38.959 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:38.959 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:38.959 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:38.959 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.959 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:38.959 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.959 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:38.959 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:38.959 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:38.959 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:45.527 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:45.527 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:20:45.527 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:45.527 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:45.527 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:45.527 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:45.527 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:45.527 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:20:45.527 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:45.527 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:20:45.527 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:20:45.527 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:20:45.527 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:20:45.527 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:20:45.527 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:20:45.527 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:45.527 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:45.527 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:45.527 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:45.527 
10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:45.527 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:45.527 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:45.527 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:45.527 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:45.527 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:45.527 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:45.527 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:45.527 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:45.527 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:45.527 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:45.527 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:45.527 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:45.527 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:45.527 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:45.527 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:45.527 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:45.527 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:45.527 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:45.527 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:45.527 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:45.527 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:45.527 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:45.527 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:45.527 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:45.527 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:45.527 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:45.527 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:45.527 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:45.528 Found net devices under 0000:86:00.0: cvl_0_0 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:45.528 Found net devices under 0000:86:00.1: cvl_0_1 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:45.528 10:47:51 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:45.528 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:45.528 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:45.528 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:45.528 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:20:45.528 00:20:45.528 --- 10.0.0.2 ping statistics --- 00:20:45.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:45.528 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:20:45.528 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:45.528 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:45.528 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:20:45.528 00:20:45.528 --- 10.0.0.1 ping statistics --- 00:20:45.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:45.528 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:20:45.528 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:45.528 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:20:45.528 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:45.528 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:45.528 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:45.528 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:45.528 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:45.528 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:45.528 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:45.528 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:20:45.528 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:45.528 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:45.528 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:45.528 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=1729351 00:20:45.528 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 1729351 00:20:45.528 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:45.528 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 1729351 ']' 00:20:45.528 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:45.528 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:45.528 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:45.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:45.528 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:45.528 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:45.529 [2024-11-19 10:47:52.110840] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
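Note: the nvmftestinit trace above builds the "phy" test topology — the first E810 port (cvl_0_0) is moved into a private network namespace, cvl_0_0_ns_spdk, and addressed as 10.0.0.2 (target side), while its peer port (cvl_0_1) stays in the root namespace as 10.0.0.1 (initiator side), so NVMe/TCP traffic actually crosses the physical link between the two ports; from here on every target-side command is wrapped in "ip netns exec cvl_0_0_ns_spdk" (NVMF_TARGET_NS_CMD). The wait_for_buf test traced below then starves the transport's buffer pool on purpose: it caps the iobuf small pool at 154 buffers, creates the TCP transport with only 24 shared buffers (-n 24 -b 24), pushes 4-deep 128 KiB random reads through a malloc namespace, and asserts that the nvmf_TCP small-pool retry counter ends up non-zero. As a rough sketch, the same configuration could be driven directly through scripts/rpc.py — assuming, as in this harness, that rpc_cmd is a thin wrapper around it; every flag below is copied from the trace, not invented:

  # nvmf_tgt was launched with --wait-for-rpc, so pools are tuned before framework init
  scripts/rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
  scripts/rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192
  scripts/rpc.py framework_start_init
  scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
  scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # pass criterion, checked after the perf run: small-pool allocations must have been retried
  scripts/rpc.py iobuf_get_stats | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'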
00:20:45.529 [2024-11-19 10:47:52.110883] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:45.529 [2024-11-19 10:47:52.190944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.529 [2024-11-19 10:47:52.234739] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:45.529 [2024-11-19 10:47:52.234770] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:45.529 [2024-11-19 10:47:52.234778] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:45.529 [2024-11-19 10:47:52.234785] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:45.529 [2024-11-19 10:47:52.234790] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:45.529 [2024-11-19 10:47:52.235350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.529 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:45.529 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:20:45.529 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:45.529 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:45.529 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:45.529 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:45.529 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:45.529 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:45.529 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:20:45.529 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.529 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:45.529 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.529 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:20:45.529 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.529 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:45.529 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.529 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:20:45.529 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.529 10:47:52 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:45.529 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.529 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:45.529 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.529 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:45.529 Malloc0 00:20:45.529 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.529 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:20:45.529 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.529 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:45.529 [2024-11-19 10:47:52.412715] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:45.529 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.529 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:20:45.529 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.529 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:45.529 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.529 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:45.529 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.529 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:45.529 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.529 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:45.529 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.529 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:45.529 [2024-11-19 10:47:52.440886] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:45.529 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.529 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:45.529 [2024-11-19 10:47:52.524045] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:46.905 Initializing NVMe Controllers 00:20:46.905 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:46.905 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:20:46.905 Initialization complete. Launching workers. 00:20:46.905 ======================================================== 00:20:46.905 Latency(us) 00:20:46.905 Device Information : IOPS MiB/s Average min max 00:20:46.905 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32238.89 7274.57 63843.80 00:20:46.905 ======================================================== 00:20:46.905 Total : 129.00 16.12 32238.89 7274.57 63843.80 00:20:46.905 00:20:46.905 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:20:46.905 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:20:46.905 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.905 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:46.905 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.905 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:20:46.905 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:20:46.905 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:46.905 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:20:46.905 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:46.905 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:20:46.905 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:46.905 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:20:46.905 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:46.905 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:46.905 rmmod nvme_tcp 00:20:46.905 rmmod nvme_fabrics 00:20:46.905 rmmod nvme_keyring 00:20:46.905 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:46.905 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:20:46.905 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:20:46.906 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 1729351 ']' 00:20:46.906 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 1729351 00:20:46.906 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 1729351 ']' 00:20:46.906 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 1729351 00:20:46.906 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:20:46.906 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:46.906 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1729351 00:20:46.906 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:46.906 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:46.906 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1729351' 00:20:46.906 killing process with pid 1729351 00:20:46.906 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 1729351 00:20:46.906 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 1729351 00:20:47.165 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:47.165 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:47.165 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:47.165 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:20:47.165 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:20:47.165 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:47.165 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:20:47.165 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:47.165 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:47.165 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:47.165 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:47.165 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:49.065 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:49.065 00:20:49.065 real 0m10.539s 00:20:49.065 user 0m4.047s 00:20:49.065 sys 0m4.948s 00:20:49.065 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:49.065 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:49.065 ************************************ 00:20:49.065 END TEST nvmf_wait_for_buf 00:20:49.065 ************************************ 00:20:49.065 10:47:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:20:49.066 10:47:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:20:49.066 10:47:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:20:49.066 10:47:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:20:49.066 10:47:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:20:49.066 10:47:56 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:55.630 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:55.630 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:20:55.630 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:55.630 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:55.630 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:55.630 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:55.630 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:55.630 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:20:55.630 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:55.630 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:20:55.630 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:20:55.630 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:20:55.630 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:20:55.630 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:20:55.630 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:20:55.630 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:55.630 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:55.630 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:55.630 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:55.630 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:55.630 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:55.630 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:55.630 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:55.630 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:55.630 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:55.630 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:55.630 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:55.630 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:55.630 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:55.630 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:55.630 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:55.630 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:55.631 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:55.631 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:55.631 Found net devices under 0000:86:00.0: cvl_0_0 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:55.631 Found net devices under 0000:86:00.1: cvl_0_1 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:55.631 ************************************ 00:20:55.631 START TEST nvmf_perf_adq 00:20:55.631 ************************************ 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:55.631 * Looking for test storage... 00:20:55.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:55.631 10:48:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:55.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.631 --rc genhtml_branch_coverage=1 00:20:55.631 --rc genhtml_function_coverage=1 00:20:55.631 --rc genhtml_legend=1 00:20:55.631 --rc geninfo_all_blocks=1 00:20:55.631 --rc geninfo_unexecuted_blocks=1 00:20:55.631 00:20:55.631 ' 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:55.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.631 --rc genhtml_branch_coverage=1 00:20:55.631 --rc genhtml_function_coverage=1 00:20:55.631 --rc genhtml_legend=1 00:20:55.631 --rc geninfo_all_blocks=1 00:20:55.631 --rc geninfo_unexecuted_blocks=1 00:20:55.631 00:20:55.631 ' 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:55.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.631 --rc genhtml_branch_coverage=1 00:20:55.631 --rc genhtml_function_coverage=1 00:20:55.631 --rc genhtml_legend=1 00:20:55.631 --rc geninfo_all_blocks=1 00:20:55.631 --rc geninfo_unexecuted_blocks=1 00:20:55.631 00:20:55.631 ' 00:20:55.631 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:55.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.632 --rc genhtml_branch_coverage=1 00:20:55.632 --rc genhtml_function_coverage=1 00:20:55.632 --rc genhtml_legend=1 00:20:55.632 --rc geninfo_all_blocks=1 00:20:55.632 --rc geninfo_unexecuted_blocks=1 00:20:55.632 00:20:55.632 ' 00:20:55.632 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
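Note: the "lcov --version" probe above runs through the lt()/cmp_versions() helpers from scripts/common.sh, which split version strings on ".", "-" and ":" (the IFS=.-: visible in the trace) and compare them field by field. A minimal bash reconstruction of that idiom — an illustrative sketch, not a verbatim excerpt; the real helper also normalizes each field through its decimal() function:

  lt() { cmp_versions "$1" "<" "$2"; }    # e.g. "lt 1.15 2" -> success

  cmp_versions() {
      local IFS=.-:                        # split on dot, dash and colon
      local -a ver1=($1) ver2=($3)
      local op=$2 v
      # walk the longer field list; fields missing on one side read as 0 in (( ))
      for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
          ((ver1[v] > ver2[v])) && { [[ $op == ">" ]]; return; }
          ((ver1[v] < ver2[v])) && { [[ $op == "<" ]]; return; }
      done
      [[ $op == "==" ]]                    # all fields compared equal
  }

Here "lt 1.15 2" is decided by the first fields (1 < 2), so cmp_versions returns success and the harness exports the lcov 1.x-compatible LCOV_OPTS shown above, before perf_adq.sh sources test/nvmf/common.sh again for the next test.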
00:20:55.632 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:55.632 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:55.632 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:55.632 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:55.632 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:55.632 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:55.632 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:55.632 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:55.632 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:55.632 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:55.632 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:55.632 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:55.632 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:55.632 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:55.632 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:55.632 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:55.632 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:55.632 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:55.632 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:20:55.632 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:55.632 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:55.632 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:55.632 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.632 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.632 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.632 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:55.632 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.632 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:20:55.632 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:55.632 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:55.632 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:55.632 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:55.632 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:55.632 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:55.632 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:55.632 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:55.632 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:55.632 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:55.632 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:55.632 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:55.632 10:48:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:00.907 10:48:07 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:00.907 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:00.907 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:00.907 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:00.908 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.908 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:00.908 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:00.908 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:00.908 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:00.908 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.908 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:00.908 Found net devices under 0000:86:00.0: cvl_0_0 00:21:00.908 10:48:07 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.908 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:00.908 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.908 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:00.908 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:00.908 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:00.908 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:00.908 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.908 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:00.908 Found net devices under 0000:86:00.1: cvl_0_1 00:21:00.908 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.908 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:00.908 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:00.908 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:00.908 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:00.908 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:00.908 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:00.908 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:01.844 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:03.746 10:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:09.020 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:09.020 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:09.020 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:09.020 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:09.020 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:09.020 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:09.020 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:09.020 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:09.020 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:09.020 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:09.020 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:21:09.020 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:09.020 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:09.020 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:09.020 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:09.020 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:09.020 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:09.020 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:09.020 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:09.020 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:09.020 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:09.020 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:09.020 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:09.021 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:09.021 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:09.021 Found net devices under 0000:86:00.0: cvl_0_0 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:09.021 Found net devices under 0000:86:00.1: cvl_0_1 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:09.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:09.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:21:09.021 00:21:09.021 --- 10.0.0.2 ping statistics --- 00:21:09.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.021 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:09.021 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
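nvmf_tcp_init above turns the two E810 ports into a back-to-back rig on a single machine: the target port moves into a private network namespace with 10.0.0.2, the initiator port stays in the default namespace with 10.0.0.1, an iptables rule admits NVMe/TCP on port 4420, and the two pings prove the path in both directions before any NVMe traffic flows. The same sequence, condensed from the commands in this log:

# Sketch: single-host target/initiator split via a network namespace (names from this log).
NS=cvl_0_0_ns_spdk TGT_IF=cvl_0_0 INI_IF=cvl_0_1

ip -4 addr flush $TGT_IF && ip -4 addr flush $INI_IF
ip netns add $NS
ip link set $TGT_IF netns $NS                          # target port leaves the default ns
ip addr add 10.0.0.1/24 dev $INI_IF                    # initiator side
ip netns exec $NS ip addr add 10.0.0.2/24 dev $TGT_IF  # target side
ip link set $INI_IF up
ip netns exec $NS ip link set $TGT_IF up
ip netns exec $NS ip link set lo up
iptables -I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # default ns -> namespace
ip netns exec $NS ping -c 1 10.0.0.1                   # namespace  -> default ns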
00:21:09.021 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:21:09.021 00:21:09.021 --- 10.0.0.1 ping statistics --- 00:21:09.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.021 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:09.021 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:09.022 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:09.022 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:09.022 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1738208 00:21:09.022 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:09.022 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1738208 00:21:09.022 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1738208 ']' 00:21:09.022 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:09.022 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:09.022 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:09.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:09.022 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:09.022 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:09.022 [2024-11-19 10:48:16.346457] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
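nvmfappstart runs the target inside that namespace with --wait-for-rpc, so nvmf_tgt halts after bringing up its RPC server and waits; everything between here and framework_start_init happens in that paused window, which is the only time socket-implementation options may still be changed. A sketch of the launch-and-wait pattern using SPDK's scripts/rpc.py, where the polling loop is a simplification of the harness's waitforlisten helper:

# Sketch: start the target paused and wait for its RPC socket (paths from this workspace).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NS_EXEC="ip netns exec cvl_0_0_ns_spdk"

$NS_EXEC $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!

for _ in $(seq 1 100); do                  # simplified waitforlisten
    $SPDK/scripts/rpc.py -t 1 rpc_get_methods &>/dev/null && break
    sleep 0.1
done
echo "nvmf_tgt ($nvmfpid) is answering RPCs"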
00:21:09.022 [2024-11-19 10:48:16.346499] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:09.022 [2024-11-19 10:48:16.423211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:09.022 [2024-11-19 10:48:16.466898] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:09.022 [2024-11-19 10:48:16.466935] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:09.022 [2024-11-19 10:48:16.466943] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:09.022 [2024-11-19 10:48:16.466956] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:09.022 [2024-11-19 10:48:16.466962] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:09.022 [2024-11-19 10:48:16.468523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:09.022 [2024-11-19 10:48:16.468632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:09.022 [2024-11-19 10:48:16.468736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.022 [2024-11-19 10:48:16.468736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:09.956 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:09.956 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:09.956 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:09.956 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:09.956 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:09.956 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:09.956 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:09.956 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:09.956 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:09.956 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.956 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:09.956 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.956 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:09.956 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:09.956 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.956 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:09.956 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.956 
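adq_configure_nvmf_target then queries the default sock implementation (posix here) and sets its options while the app is still paused; the function's argument is the placement-id mode, and this baseline pass uses 0, i.e. no connection placement. Replayed as bare rpc.py calls, with the framework_start_init and transport creation that follow below included for completeness (rpc.py path as in the previous sketch):

# Sketch: baseline (non-ADQ) target configuration, mirroring perf_adq.sh@42-45 above.
impl=$(rpc.py sock_get_default_impl | jq -r .impl_name)     # "posix" on this run
rpc.py sock_impl_set_options -i "$impl" \
    --enable-placement-id 0 --enable-zerocopy-send-server   # 0 = placement disabled
rpc.py framework_start_init                                 # release the --wait-for-rpc pause
rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0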
10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:09.956 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.956 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:09.956 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.956 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:09.956 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.956 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:09.956 [2024-11-19 10:48:17.364458] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:09.956 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.956 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:09.956 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.956 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:10.214 Malloc1 00:21:10.214 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.214 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:10.214 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.214 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:10.214 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.214 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:10.214 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.214 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:10.214 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.214 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:10.214 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.214 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:10.214 [2024-11-19 10:48:17.433164] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:10.214 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.214 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1738458 00:21:10.214 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:10.214 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:12.116 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:12.116 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.116 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:12.116 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.116 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:12.116 "tick_rate": 2300000000, 00:21:12.116 "poll_groups": [ 00:21:12.116 { 00:21:12.116 "name": "nvmf_tgt_poll_group_000", 00:21:12.116 "admin_qpairs": 1, 00:21:12.116 "io_qpairs": 1, 00:21:12.116 "current_admin_qpairs": 1, 00:21:12.116 "current_io_qpairs": 1, 00:21:12.116 "pending_bdev_io": 0, 00:21:12.116 "completed_nvme_io": 19279, 00:21:12.116 "transports": [ 00:21:12.116 { 00:21:12.116 "trtype": "TCP" 00:21:12.116 } 00:21:12.116 ] 00:21:12.116 }, 00:21:12.116 { 00:21:12.116 "name": "nvmf_tgt_poll_group_001", 00:21:12.116 "admin_qpairs": 0, 00:21:12.116 "io_qpairs": 1, 00:21:12.116 "current_admin_qpairs": 0, 00:21:12.116 "current_io_qpairs": 1, 00:21:12.116 "pending_bdev_io": 0, 00:21:12.116 "completed_nvme_io": 19471, 00:21:12.116 "transports": [ 00:21:12.116 { 00:21:12.116 "trtype": "TCP" 00:21:12.116 } 00:21:12.116 ] 00:21:12.116 }, 00:21:12.116 { 00:21:12.116 "name": "nvmf_tgt_poll_group_002", 00:21:12.116 "admin_qpairs": 0, 00:21:12.116 "io_qpairs": 1, 00:21:12.116 "current_admin_qpairs": 0, 00:21:12.116 "current_io_qpairs": 1, 00:21:12.116 "pending_bdev_io": 0, 00:21:12.116 "completed_nvme_io": 19279, 00:21:12.116 "transports": [ 00:21:12.116 { 00:21:12.116 "trtype": "TCP" 00:21:12.116 } 00:21:12.116 ] 00:21:12.116 }, 00:21:12.116 { 00:21:12.116 "name": "nvmf_tgt_poll_group_003", 00:21:12.116 "admin_qpairs": 0, 00:21:12.116 "io_qpairs": 1, 00:21:12.116 "current_admin_qpairs": 0, 00:21:12.116 "current_io_qpairs": 1, 00:21:12.116 "pending_bdev_io": 0, 00:21:12.116 "completed_nvme_io": 19027, 00:21:12.116 "transports": [ 00:21:12.116 { 00:21:12.116 "trtype": "TCP" 00:21:12.116 } 00:21:12.116 ] 00:21:12.116 } 00:21:12.116 ] 00:21:12.116 }' 00:21:12.116 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:12.116 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:12.116 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:12.116 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:12.116 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1738458 00:21:20.234 Initializing NVMe Controllers 00:21:20.234 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:20.234 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:20.234 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:20.234 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:20.234 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:21:20.234 Initialization complete. Launching workers. 00:21:20.234 ======================================================== 00:21:20.234 Latency(us) 00:21:20.234 Device Information : IOPS MiB/s Average min max 00:21:20.234 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10021.65 39.15 6385.94 1430.29 10542.80 00:21:20.234 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10298.24 40.23 6215.45 2350.91 13255.53 00:21:20.234 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10286.94 40.18 6221.21 2059.42 10497.72 00:21:20.234 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10137.45 39.60 6312.90 2500.43 10886.05 00:21:20.234 ======================================================== 00:21:20.234 Total : 40744.28 159.16 6283.09 1430.29 13255.53 00:21:20.234 00:21:20.234 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:21:20.234 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:20.234 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:20.234 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:20.234 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:20.234 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:20.234 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:20.234 rmmod nvme_tcp 00:21:20.234 rmmod nvme_fabrics 00:21:20.234 rmmod nvme_keyring 00:21:20.234 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:20.234 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:20.234 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:20.234 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1738208 ']' 00:21:20.234 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1738208 00:21:20.234 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1738208 ']' 00:21:20.234 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1738208 00:21:20.234 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:21:20.234 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:20.234 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1738208 00:21:20.494 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:20.494 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:20.494 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1738208' 00:21:20.494 killing process with pid 1738208 00:21:20.494 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1738208 00:21:20.494 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1738208 00:21:20.494 10:48:27 
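The nvmf_get_stats check above is the functional assertion of the baseline run: with placement disabled, the 0xF mask yields four poll groups, the perf job's four connections should spread one per group, and the jq filter must therefore match exactly four entries (count=4, so the [[ 4 -ne 4 ]] guard does not fire). The latency table then confirms all four lcores carried roughly equal load, about 40.7k IOPS in total. The assertion, isolated as a sketch (rpc.py as before):

# Sketch: one-qpair-per-poll-group check, as run above.
count=$(rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
        | wc -l)
[[ $count -ne 4 ]] && { echo "expected 1 io qpair on each of 4 poll groups"; exit 1; }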
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:20.494 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:20.494 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:20.494 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:20.494 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:20.494 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:20.494 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:20.494 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:20.494 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:20.494 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:20.494 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:20.494 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:23.032 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:23.032 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:21:23.032 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:23.032 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:23.968 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:25.875 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:31.145 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:21:31.145 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:31.145 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:31.145 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:31.146 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:31.146 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:31.146 Found net devices under 0000:86:00.0: cvl_0_0 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:31.146 Found net devices under 0000:86:00.1: cvl_0_1 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:31.146 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:31.147 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:31.147 10:48:38 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:31.147 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:31.147 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:31.147 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:31.147 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:31.147 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:31.147 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:31.147 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:31.147 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.428 ms 00:21:31.147 00:21:31.147 --- 10.0.0.2 ping statistics --- 00:21:31.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.147 rtt min/avg/max/mdev = 0.428/0.428/0.428/0.000 ms 00:21:31.147 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:31.147 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:31.147 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:21:31.147 00:21:31.147 --- 10.0.0.1 ping statistics --- 00:21:31.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.147 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:21:31.147 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:31.147 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:31.147 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:31.147 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:31.147 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:31.147 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:31.147 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:31.147 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:31.147 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:31.147 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:21:31.147 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:31.147 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:31.147 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:31.147 net.core.busy_poll = 1 00:21:31.147 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:31.147 net.core.busy_read = 1 00:21:31.147 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:31.147 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:31.147 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:31.147 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:31.147 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:31.406 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:31.406 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:31.406 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:31.406 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:31.406 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1742229 00:21:31.406 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1742229 00:21:31.406 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:31.406 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1742229 ']' 00:21:31.406 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:31.406 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:31.406 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:31.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:31.406 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:31.406 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:31.406 [2024-11-19 10:48:38.690700] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:21:31.406 [2024-11-19 10:48:38.690755] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:31.406 [2024-11-19 10:48:38.769930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:31.406 [2024-11-19 10:48:38.813053] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
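adq_configure_driver, traced above, is where ADQ itself is wired up on the target port inside the namespace: hardware TC offload on, the ice driver's channel-pkt-inspect-optimize private flag off, kernel busy polling on, an mqprio root qdisc that splits the queues into two traffic classes, and a hardware-offloaded (skip_sw) flower filter that steers NVMe/TCP flows to 10.0.0.2:4420 into TC1. Gathered from this log into one annotated sequence:

# Sketch: ADQ queue/filter setup, as executed above (interface and IPs from this log).
NSE="ip netns exec cvl_0_0_ns_spdk"
IF=cvl_0_0

$NSE ethtool --offload $IF hw-tc-offload on
$NSE ethtool --set-priv-flags $IF channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1

# TC0 = 2 queues at offset 0 (default traffic), TC1 = 2 queues at offset 2 (ADQ set)
$NSE tc qdisc add dev $IF root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
$NSE tc qdisc add dev $IF ingress
$NSE tc filter add dev $IF protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
# the run above then invokes scripts/perf/nvmf/set_xps_rxqs cvl_0_0 to align XPS with the RX queues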
00:21:31.406 [2024-11-19 10:48:38.813091] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:31.406 [2024-11-19 10:48:38.813097] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:31.406 [2024-11-19 10:48:38.813104] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:31.406 [2024-11-19 10:48:38.813109] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:31.406 [2024-11-19 10:48:38.814672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:31.406 [2024-11-19 10:48:38.814781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:31.406 [2024-11-19 10:48:38.814887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:31.406 [2024-11-19 10:48:38.814888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:31.406 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:31.406 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:31.406 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:31.406 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:31.406 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:31.666 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:31.666 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:21:31.666 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:31.666 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:31.666 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.666 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:31.666 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.666 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:31.666 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:31.666 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.666 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:31.666 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.666 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:31.666 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.666 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:31.666 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.666 10:48:39 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:31.666 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.666 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:31.666 [2024-11-19 10:48:39.016613] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:31.666 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.666 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:31.666 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.666 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:31.666 Malloc1 00:21:31.666 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.666 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:31.666 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.666 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:31.666 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.666 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:31.666 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.666 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:31.666 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.666 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:31.666 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.666 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:31.666 [2024-11-19 10:48:39.077205] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:31.666 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.666 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1742264 00:21:31.666 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:21:31.666 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:34.196 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:21:34.196 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.196 10:48:41 
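Condensed, the target configuration replayed across the last two blocks is a short RPC sequence followed by the perf client. A sketch using scripts/rpc.py directly (rpc_cmd in the harness is a wrapper around it; $SPDK_DIR is a placeholder):

  RPC="$SPDK_DIR/scripts/rpc.py"                     # talks to /var/tmp/spdk.sock by default
  # Socket options must land before subsystem init, hence --wait-for-rpc on the target.
  $RPC sock_impl_set_options -i posix --enable-placement-id 1 --enable-zerocopy-send-server
  $RPC framework_start_init
  # --sock-priority 1 makes accepted connections inherit the ADQ traffic class.
  $RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
  $RPC bdev_malloc_create 64 512 -b Malloc1          # 64 MiB bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Load generator pinned to 4 cores (0xF0), exactly as invoked in the trace:
  "$SPDK_DIR/build/bin/spdk_nvme_perf" -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'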
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:21:34.196 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:34.196 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{
00:21:34.196 "tick_rate": 2300000000,
00:21:34.196 "poll_groups": [
00:21:34.196 {
00:21:34.196 "name": "nvmf_tgt_poll_group_000",
00:21:34.196 "admin_qpairs": 1,
00:21:34.196 "io_qpairs": 2,
00:21:34.196 "current_admin_qpairs": 1,
00:21:34.196 "current_io_qpairs": 2,
00:21:34.196 "pending_bdev_io": 0,
00:21:34.196 "completed_nvme_io": 28144,
00:21:34.196 "transports": [
00:21:34.196 {
00:21:34.196 "trtype": "TCP"
00:21:34.196 }
00:21:34.196 ]
00:21:34.196 },
00:21:34.196 {
00:21:34.196 "name": "nvmf_tgt_poll_group_001",
00:21:34.196 "admin_qpairs": 0,
00:21:34.196 "io_qpairs": 2,
00:21:34.196 "current_admin_qpairs": 0,
00:21:34.196 "current_io_qpairs": 2,
00:21:34.196 "pending_bdev_io": 0,
00:21:34.196 "completed_nvme_io": 27435,
00:21:34.196 "transports": [
00:21:34.196 {
00:21:34.196 "trtype": "TCP"
00:21:34.196 }
00:21:34.196 ]
00:21:34.196 },
00:21:34.196 {
00:21:34.196 "name": "nvmf_tgt_poll_group_002",
00:21:34.196 "admin_qpairs": 0,
00:21:34.196 "io_qpairs": 0,
00:21:34.196 "current_admin_qpairs": 0,
00:21:34.196 "current_io_qpairs": 0,
00:21:34.196 "pending_bdev_io": 0,
00:21:34.196 "completed_nvme_io": 0,
00:21:34.196 "transports": [
00:21:34.196 {
00:21:34.196 "trtype": "TCP"
00:21:34.196 }
00:21:34.196 ]
00:21:34.196 },
00:21:34.196 {
00:21:34.196 "name": "nvmf_tgt_poll_group_003",
00:21:34.196 "admin_qpairs": 0,
00:21:34.196 "io_qpairs": 0,
00:21:34.196 "current_admin_qpairs": 0,
00:21:34.196 "current_io_qpairs": 0,
00:21:34.196 "pending_bdev_io": 0,
00:21:34.196 "completed_nvme_io": 0,
00:21:34.196 "transports": [
00:21:34.196 {
00:21:34.196 "trtype": "TCP"
00:21:34.196 }
00:21:34.196 ]
00:21:34.196 }
00:21:34.196 ]
00:21:34.196 }'
00:21:34.196 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length'
00:21:34.196 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l
00:21:34.196 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2
00:21:34.196 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]]
00:21:34.196 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1742264
00:21:42.320 Initializing NVMe Controllers
00:21:42.320 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:42.320 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:21:42.320 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:21:42.320 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:21:42.320 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:21:42.320 Initialization complete. Launching workers.
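The pass/fail gate visible in the stats above: with four target cores but connections steered onto the two queues of TC 1, the I/O qpairs should pile onto two poll groups and leave the other two idle. The check, reconstructed from the jq/wc trace (the error branch is a sketch; lines 110-113 of perf_adq.sh are skipped in this run because count was exactly 2):

  RPC="$SPDK_DIR/scripts/rpc.py"     # placeholder, as in the sketch above
  nvmf_stats=$($RPC nvmf_get_stats)
  # Count poll groups that never received an I/O queue pair.
  count=$(echo "$nvmf_stats" \
      | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' | wc -l)
  if [[ $count -lt 2 ]]; then
      # Fewer idle groups than expected => ADQ steering leaked onto extra cores.
      echo "ERROR: connections were not confined to the ADQ traffic class"
  fi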
00:21:42.320 ========================================================
00:21:42.320 Latency(us)
00:21:42.320 Device Information : IOPS MiB/s Average min max
00:21:42.320 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7626.00 29.79 8406.41 1602.22 52186.02
00:21:42.320 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7700.00 30.08 8311.06 1637.08 52309.45
00:21:42.320 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6877.00 26.86 9305.36 1649.77 53258.03
00:21:42.320 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7551.10 29.50 8473.92 1444.60 52616.45
00:21:42.320 ========================================================
00:21:42.320 Total : 29754.10 116.23 8606.64 1444.60 53258.03
00:21:42.320
00:21:42.320 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini
00:21:42.320 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:42.320 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:21:42.320 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:42.320 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:21:42.320 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:42.320 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:42.320 rmmod nvme_tcp
00:21:42.320 rmmod nvme_fabrics
00:21:42.320 rmmod nvme_keyring
00:21:42.320 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:42.320 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:21:42.320 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:21:42.320 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1742229 ']'
00:21:42.320 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1742229
00:21:42.320 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1742229 ']'
00:21:42.320 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1742229
00:21:42.320 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname
00:21:42.320 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:42.320 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1742229
00:21:42.320 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:21:42.320 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:21:42.320 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1742229'
00:21:42.320 killing process with pid 1742229
00:21:42.320 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1742229
00:21:42.320 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1742229
00:21:42.320 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:42.320
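The killprocess helper expanded above is worth seeing in one piece. A reconstruction from the expanded calls (retry logic and the sudo branch are simplified; the real helper lives in autotest_common.sh):

  killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1
      kill -0 "$pid" || return 0                     # already gone, nothing to do
      if [[ $(uname) == Linux ]]; then
          # refuse to blindly kill a sudo wrapper (see the comm= check above)
          local process_name
          process_name=$(ps --no-headers -o comm= "$pid")
          [[ $process_name == sudo ]] && return 1    # simplification of the real branch
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                    # valid: nvmf_tgt is a child of this shell
  }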
10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:42.320 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:42.320 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:42.320 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:42.320 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:42.320 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:42.320 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:42.320 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:42.320 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.320 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:42.320 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.770 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:45.770 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:21:45.770 00:21:45.770 real 0m50.537s 00:21:45.770 user 2m46.701s 00:21:45.770 sys 0m10.384s 00:21:45.770 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:45.770 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:45.770 ************************************ 00:21:45.770 END TEST nvmf_perf_adq 00:21:45.770 ************************************ 00:21:45.770 10:48:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:45.770 10:48:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:45.770 10:48:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:45.770 10:48:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:45.770 ************************************ 00:21:45.770 START TEST nvmf_shutdown 00:21:45.770 ************************************ 00:21:45.770 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:45.770 * Looking for test storage... 
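The cleanup idiom above is compact: rather than deleting firewall rules one by one, nvmftestfini rewrites the whole table minus anything tagged SPDK_NVMF, then tears down the namespace. As a sketch (the netns deletion is an assumption about what the elided _remove_spdk_ns does):

  # Drop every rule carrying the SPDK_NVMF comment in one pass.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # Presumed body of _remove_spdk_ns, plus the address flush traced above:
  ip netns del cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1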
00:21:45.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:45.770 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:45.770 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:21:45.770 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:45.770 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:45.770 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:45.770 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:45.770 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:45.770 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:21:45.770 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:21:45.770 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:21:45.770 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:21:45.770 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:21:45.770 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:21:45.770 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:21:45.770 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:45.770 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:21:45.770 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:21:45.770 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:45.770 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:45.770 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:21:45.770 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:21:45.770 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:45.770 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:21:45.770 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:21:45.770 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:21:45.770 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:21:45.770 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:45.770 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:21:45.770 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:21:45.770 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:45.770 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:45.770 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:21:45.770 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:45.770 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:45.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.770 --rc genhtml_branch_coverage=1 00:21:45.770 --rc genhtml_function_coverage=1 00:21:45.770 --rc genhtml_legend=1 00:21:45.770 --rc geninfo_all_blocks=1 00:21:45.770 --rc geninfo_unexecuted_blocks=1 00:21:45.770 00:21:45.770 ' 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:45.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.771 --rc genhtml_branch_coverage=1 00:21:45.771 --rc genhtml_function_coverage=1 00:21:45.771 --rc genhtml_legend=1 00:21:45.771 --rc geninfo_all_blocks=1 00:21:45.771 --rc geninfo_unexecuted_blocks=1 00:21:45.771 00:21:45.771 ' 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:45.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.771 --rc genhtml_branch_coverage=1 00:21:45.771 --rc genhtml_function_coverage=1 00:21:45.771 --rc genhtml_legend=1 00:21:45.771 --rc geninfo_all_blocks=1 00:21:45.771 --rc geninfo_unexecuted_blocks=1 00:21:45.771 00:21:45.771 ' 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:45.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.771 --rc genhtml_branch_coverage=1 00:21:45.771 --rc genhtml_function_coverage=1 00:21:45.771 --rc genhtml_legend=1 00:21:45.771 --rc geninfo_all_blocks=1 00:21:45.771 --rc geninfo_unexecuted_blocks=1 00:21:45.771 00:21:45.771 ' 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
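The long cmp_versions expansion above (here gating on the installed lcov, `lt 1.15 2`) is a plain component-wise numeric compare: split both versions on '.', '-' or ':' and walk the fields. Condensed into one function (the original also validates each field through a `decimal` helper, elided here):

  lt() {  # usage: lt 1.15 2  -> success if $1 < $2
      local -a ver1 ver2
      IFS='.-:' read -ra ver1 <<< "$1"
      IFS='.-:' read -ra ver2 <<< "$2"
      local v d1 d2
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          d1=${ver1[v]:-0} d2=${ver2[v]:-0}
          ((d1 > d2)) && return 1
          ((d1 < d2)) && return 0
      done
      return 1                                       # equal is not less-than
  }
  # lcov 1.15 < 2, so the 1.x-style --rc lcov_* coverage options get exported.
  lt 1.15 2 && echo "using lcov 1.x option set"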
00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:45.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:45.771 10:48:52 
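One real wart surfaces while nvmf/common.sh is sourced above: `'[' '' -eq 1 ']'` trips `[: : integer expression expected` at line 33 because the tested variable expands to empty. The standard guard, with `$flag` as a placeholder since the variable's name is not visible in the trace:

  # Fails with 'integer expression expected' when flag is empty/unset:
  #   [ "$flag" -eq 1 ] && enable_feature
  # Defaulting the expansion keeps the test well-formed:
  if [ "${flag:-0}" -eq 1 ]; then
      echo "feature enabled"
  fi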
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:45.771 ************************************ 00:21:45.771 START TEST nvmf_shutdown_tc1 00:21:45.771 ************************************ 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:45.771 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:52.344 10:48:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:52.344 10:48:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:52.344 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:52.344 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:52.344 Found net devices under 0000:86:00.0: cvl_0_0 00:21:52.344 10:48:58 
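Device discovery above is pure sysfs: for each whitelisted PCI ID (Intel E810, device 0x159b here), list the net devices registered under that function and keep the ones that are up. Roughly (the helper's exact up-check is not visible in the trace; operstate is one reasonable reading):

  net_devs=()
  for pci in 0000:86:00.0 0000:86:00.1; do
      for dev in "/sys/bus/pci/devices/$pci/net/"*; do
          [[ -e $dev ]] || continue                  # no netdev registered for this function
          name=${dev##*/}
          if [[ $(cat "$dev/operstate") == up ]]; then
              echo "Found net devices under $pci: $name"
              net_devs+=("$name")
          fi
      done
  done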
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:52.344 Found net devices under 0000:86:00.1: cvl_0_1 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:52.344 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:52.345 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:52.345 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.330 ms 00:21:52.345 00:21:52.345 --- 10.0.0.2 ping statistics --- 00:21:52.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.345 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:52.345 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:52.345 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:21:52.345 00:21:52.345 --- 10.0.0.1 ping statistics --- 00:21:52.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.345 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=1747710 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 1747710 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1747710 ']' 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:52.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
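nvmf_tcp_init, traced above, gives the target and initiator a real wire by splitting the two ports of one NIC across a network namespace. Condensed from the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port moves into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port, tagged so the cleanup pass can strip it later.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                  # sanity-check both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1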
00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:52.345 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:52.345 [2024-11-19 10:48:59.004544] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:21:52.345 [2024-11-19 10:48:59.004590] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:52.345 [2024-11-19 10:48:59.084168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:52.345 [2024-11-19 10:48:59.126750] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:52.345 [2024-11-19 10:48:59.126789] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:52.345 [2024-11-19 10:48:59.126796] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:52.345 [2024-11-19 10:48:59.126802] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:52.345 [2024-11-19 10:48:59.126807] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:52.345 [2024-11-19 10:48:59.128261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:52.345 [2024-11-19 10:48:59.128373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:52.345 [2024-11-19 10:48:59.128478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:52.345 [2024-11-19 10:48:59.128478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:52.345 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:52.345 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:21:52.345 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:52.345 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:52.345 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:52.345 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:52.345 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:52.345 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.345 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:52.345 [2024-11-19 10:48:59.277911] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:52.345 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.345 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:52.345 10:48:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:52.345 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:52.345 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:52.345 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:52.345 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:52.345 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:52.345 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:52.345 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:52.345 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:52.345 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:52.346 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:52.346 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:52.346 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:52.346 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:52.346 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:52.346 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:52.346 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:52.346 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:52.346 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:52.346 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:52.346 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:52.346 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:52.346 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:52.346 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:52.346 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:52.346 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.346 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:52.346 Malloc1 
00:21:52.346 [2024-11-19 10:48:59.393000] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:52.346 Malloc2 00:21:52.346 Malloc3 00:21:52.346 Malloc4 00:21:52.346 Malloc5 00:21:52.346 Malloc6 00:21:52.346 Malloc7 00:21:52.346 Malloc8 00:21:52.346 Malloc9 00:21:52.346 Malloc10 00:21:52.346 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.346 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:52.346 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:52.346 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:52.606 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1747961 00:21:52.606 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1747961 /var/tmp/bdevperf.sock 00:21:52.606 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1747961 ']' 00:21:52.607 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:52.607 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:52.607 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:52.607 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:52.607 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:52.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
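The create_subsystems step above is batched: the shutdown.sh@28-29 loop appends one `cat` block of RPC commands per subsystem (1..10) into rpcs.txt, and shutdown.sh@36 replays the file through a single rpc_cmd session, which is why Malloc1 through Malloc10 and the listener notice appear together. The per-subsystem heredoc is elided by xtrace, so the block below is a hypothetical reconstruction (the serial-number format and per-subsystem Malloc sizes are guesses patterned on the perf_adq trace):

  rpcs=$SPDK_DIR/test/nvmf/target/rpcs.txt           # path from the rm -rf above
  rm -rf "$rpcs"
  for i in {1..10}; do
      {
          echo "bdev_malloc_create 64 512 -b Malloc$i"
          echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
          echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
          echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
      } >> "$rpcs"
  done
  # The harness streams the batch through its persistent rpc_cmd coprocess;
  # a standalone equivalent:
  while read -r cmd; do "$SPDK_DIR/scripts/rpc.py" $cmd; done < "$rpcs"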
00:21:52.607 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:21:52.607 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:52.607 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:21:52.607 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:52.607 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:52.607 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:52.607 { 00:21:52.607 "params": { 00:21:52.607 "name": "Nvme$subsystem", 00:21:52.607 "trtype": "$TEST_TRANSPORT", 00:21:52.607 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.607 "adrfam": "ipv4", 00:21:52.607 "trsvcid": "$NVMF_PORT", 00:21:52.607 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.607 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.607 "hdgst": ${hdgst:-false}, 00:21:52.607 "ddgst": ${ddgst:-false} 00:21:52.607 }, 00:21:52.607 "method": "bdev_nvme_attach_controller" 00:21:52.607 } 00:21:52.607 EOF 00:21:52.607 )") 00:21:52.607 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:52.607 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:52.607 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:52.607 { 00:21:52.607 "params": { 00:21:52.607 "name": "Nvme$subsystem", 00:21:52.607 "trtype": "$TEST_TRANSPORT", 00:21:52.607 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.607 "adrfam": "ipv4", 00:21:52.607 "trsvcid": "$NVMF_PORT", 00:21:52.607 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.607 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.607 "hdgst": ${hdgst:-false}, 00:21:52.607 "ddgst": ${ddgst:-false} 00:21:52.607 }, 00:21:52.607 "method": "bdev_nvme_attach_controller" 00:21:52.607 } 00:21:52.607 EOF 00:21:52.607 )") 00:21:52.607 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:52.607 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:52.607 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:52.607 { 00:21:52.607 "params": { 00:21:52.607 "name": "Nvme$subsystem", 00:21:52.607 "trtype": "$TEST_TRANSPORT", 00:21:52.607 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.607 "adrfam": "ipv4", 00:21:52.607 "trsvcid": "$NVMF_PORT", 00:21:52.607 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.607 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.607 "hdgst": ${hdgst:-false}, 00:21:52.607 "ddgst": ${ddgst:-false} 00:21:52.607 }, 00:21:52.607 "method": "bdev_nvme_attach_controller" 00:21:52.607 } 00:21:52.607 EOF 00:21:52.607 )") 00:21:52.607 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:52.607 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:52.607 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:21:52.607 { 00:21:52.607 "params": { 00:21:52.607 "name": "Nvme$subsystem", 00:21:52.607 "trtype": "$TEST_TRANSPORT", 00:21:52.607 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.607 "adrfam": "ipv4", 00:21:52.607 "trsvcid": "$NVMF_PORT", 00:21:52.607 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.607 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.607 "hdgst": ${hdgst:-false}, 00:21:52.607 "ddgst": ${ddgst:-false} 00:21:52.607 }, 00:21:52.607 "method": "bdev_nvme_attach_controller" 00:21:52.607 } 00:21:52.607 EOF 00:21:52.607 )") 00:21:52.607 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:52.607 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:52.607 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:52.607 { 00:21:52.607 "params": { 00:21:52.607 "name": "Nvme$subsystem", 00:21:52.607 "trtype": "$TEST_TRANSPORT", 00:21:52.607 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.607 "adrfam": "ipv4", 00:21:52.607 "trsvcid": "$NVMF_PORT", 00:21:52.607 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.607 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.607 "hdgst": ${hdgst:-false}, 00:21:52.607 "ddgst": ${ddgst:-false} 00:21:52.607 }, 00:21:52.607 "method": "bdev_nvme_attach_controller" 00:21:52.607 } 00:21:52.607 EOF 00:21:52.607 )") 00:21:52.607 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:52.607 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:52.607 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:52.607 { 00:21:52.607 "params": { 00:21:52.607 "name": "Nvme$subsystem", 00:21:52.607 "trtype": "$TEST_TRANSPORT", 00:21:52.607 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.607 "adrfam": "ipv4", 00:21:52.607 "trsvcid": "$NVMF_PORT", 00:21:52.607 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.607 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.607 "hdgst": ${hdgst:-false}, 00:21:52.607 "ddgst": ${ddgst:-false} 00:21:52.607 }, 00:21:52.607 "method": "bdev_nvme_attach_controller" 00:21:52.607 } 00:21:52.607 EOF 00:21:52.607 )") 00:21:52.607 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:52.607 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:52.607 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:52.607 { 00:21:52.607 "params": { 00:21:52.607 "name": "Nvme$subsystem", 00:21:52.607 "trtype": "$TEST_TRANSPORT", 00:21:52.607 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.607 "adrfam": "ipv4", 00:21:52.607 "trsvcid": "$NVMF_PORT", 00:21:52.607 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.607 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.607 "hdgst": ${hdgst:-false}, 00:21:52.607 "ddgst": ${ddgst:-false} 00:21:52.607 }, 00:21:52.607 "method": "bdev_nvme_attach_controller" 00:21:52.607 } 00:21:52.607 EOF 00:21:52.607 )") 00:21:52.607 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:52.607 [2024-11-19 10:48:59.875660] Starting SPDK 
v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:21:52.607 [2024-11-19 10:48:59.875707] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:52.607 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:52.607 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:52.607 { 00:21:52.607 "params": { 00:21:52.607 "name": "Nvme$subsystem", 00:21:52.607 "trtype": "$TEST_TRANSPORT", 00:21:52.607 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.607 "adrfam": "ipv4", 00:21:52.607 "trsvcid": "$NVMF_PORT", 00:21:52.607 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.607 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.607 "hdgst": ${hdgst:-false}, 00:21:52.607 "ddgst": ${ddgst:-false} 00:21:52.607 }, 00:21:52.607 "method": "bdev_nvme_attach_controller" 00:21:52.607 } 00:21:52.607 EOF 00:21:52.607 )") 00:21:52.607 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:52.607 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:52.607 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:52.607 { 00:21:52.607 "params": { 00:21:52.607 "name": "Nvme$subsystem", 00:21:52.607 "trtype": "$TEST_TRANSPORT", 00:21:52.607 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.607 "adrfam": "ipv4", 00:21:52.607 "trsvcid": "$NVMF_PORT", 00:21:52.607 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.607 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.607 "hdgst": ${hdgst:-false}, 00:21:52.607 "ddgst": ${ddgst:-false} 00:21:52.607 }, 00:21:52.607 "method": "bdev_nvme_attach_controller" 00:21:52.607 } 00:21:52.607 EOF 00:21:52.607 )") 00:21:52.608 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:52.608 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:52.608 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:52.608 { 00:21:52.608 "params": { 00:21:52.608 "name": "Nvme$subsystem", 00:21:52.608 "trtype": "$TEST_TRANSPORT", 00:21:52.608 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.608 "adrfam": "ipv4", 00:21:52.608 "trsvcid": "$NVMF_PORT", 00:21:52.608 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.608 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.608 "hdgst": ${hdgst:-false}, 00:21:52.608 "ddgst": ${ddgst:-false} 00:21:52.608 }, 00:21:52.608 "method": "bdev_nvme_attach_controller" 00:21:52.608 } 00:21:52.608 EOF 00:21:52.608 )") 00:21:52.608 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:52.608 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
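The xtrace above (nvmf/common.sh@560-586) is the per-subsystem config builder: each pass of the for loop appends one unexpanded heredoc stanza to a bash array, and a final jq pass joins and validates the document before the app consumes it. A minimal sketch of that pattern, assuming an enclosing "subsystems"/"config" wrapper and an illustrative function name rather than the verbatim library source:

gen_target_json_sketch() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # One attach-controller stanza per subsystem; only $subsystem varies,
        # the rest comes from the environment with the fallbacks seen above.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Comma-join the stanzas inside a bdev config array; jq pretty-prints the
    # result and thereby doubles as a JSON syntax check.
    jq . <<JSON
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        $(IFS=","; printf '%s\n' "${config[*]}")
      ]
    }
  ]
}
JSON
}

Called as gen_target_json_sketch 1 2 3 4 5 6 7 8 9 10 this yields the ten attach-controller entries whose expanded form is printed next; a malformed stanza would fail jq and abort the test early.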
00:21:52.608 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:21:52.608 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:52.608 "params": { 00:21:52.608 "name": "Nvme1", 00:21:52.608 "trtype": "tcp", 00:21:52.608 "traddr": "10.0.0.2", 00:21:52.608 "adrfam": "ipv4", 00:21:52.608 "trsvcid": "4420", 00:21:52.608 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:52.608 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:52.608 "hdgst": false, 00:21:52.608 "ddgst": false 00:21:52.608 }, 00:21:52.608 "method": "bdev_nvme_attach_controller" 00:21:52.608 },{ 00:21:52.608 "params": { 00:21:52.608 "name": "Nvme2", 00:21:52.608 "trtype": "tcp", 00:21:52.608 "traddr": "10.0.0.2", 00:21:52.608 "adrfam": "ipv4", 00:21:52.608 "trsvcid": "4420", 00:21:52.608 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:52.608 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:52.608 "hdgst": false, 00:21:52.608 "ddgst": false 00:21:52.608 }, 00:21:52.608 "method": "bdev_nvme_attach_controller" 00:21:52.608 },{ 00:21:52.608 "params": { 00:21:52.608 "name": "Nvme3", 00:21:52.608 "trtype": "tcp", 00:21:52.608 "traddr": "10.0.0.2", 00:21:52.608 "adrfam": "ipv4", 00:21:52.608 "trsvcid": "4420", 00:21:52.608 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:52.608 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:52.608 "hdgst": false, 00:21:52.608 "ddgst": false 00:21:52.608 }, 00:21:52.608 "method": "bdev_nvme_attach_controller" 00:21:52.608 },{ 00:21:52.608 "params": { 00:21:52.608 "name": "Nvme4", 00:21:52.608 "trtype": "tcp", 00:21:52.608 "traddr": "10.0.0.2", 00:21:52.608 "adrfam": "ipv4", 00:21:52.608 "trsvcid": "4420", 00:21:52.608 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:52.608 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:52.608 "hdgst": false, 00:21:52.608 "ddgst": false 00:21:52.608 }, 00:21:52.608 "method": "bdev_nvme_attach_controller" 00:21:52.608 },{ 00:21:52.608 "params": { 00:21:52.608 "name": "Nvme5", 00:21:52.608 "trtype": "tcp", 00:21:52.608 "traddr": "10.0.0.2", 00:21:52.608 "adrfam": "ipv4", 00:21:52.608 "trsvcid": "4420", 00:21:52.608 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:52.608 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:52.608 "hdgst": false, 00:21:52.608 "ddgst": false 00:21:52.608 }, 00:21:52.608 "method": "bdev_nvme_attach_controller" 00:21:52.608 },{ 00:21:52.608 "params": { 00:21:52.608 "name": "Nvme6", 00:21:52.608 "trtype": "tcp", 00:21:52.608 "traddr": "10.0.0.2", 00:21:52.608 "adrfam": "ipv4", 00:21:52.608 "trsvcid": "4420", 00:21:52.608 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:52.608 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:52.608 "hdgst": false, 00:21:52.608 "ddgst": false 00:21:52.608 }, 00:21:52.608 "method": "bdev_nvme_attach_controller" 00:21:52.608 },{ 00:21:52.608 "params": { 00:21:52.608 "name": "Nvme7", 00:21:52.608 "trtype": "tcp", 00:21:52.608 "traddr": "10.0.0.2", 00:21:52.608 "adrfam": "ipv4", 00:21:52.608 "trsvcid": "4420", 00:21:52.608 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:52.608 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:52.608 "hdgst": false, 00:21:52.608 "ddgst": false 00:21:52.608 }, 00:21:52.608 "method": "bdev_nvme_attach_controller" 00:21:52.608 },{ 00:21:52.608 "params": { 00:21:52.608 "name": "Nvme8", 00:21:52.608 "trtype": "tcp", 00:21:52.608 "traddr": "10.0.0.2", 00:21:52.608 "adrfam": "ipv4", 00:21:52.608 "trsvcid": "4420", 00:21:52.608 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:52.608 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:21:52.608 "hdgst": false, 00:21:52.608 "ddgst": false 00:21:52.608 }, 00:21:52.608 "method": "bdev_nvme_attach_controller" 00:21:52.608 },{ 00:21:52.608 "params": { 00:21:52.608 "name": "Nvme9", 00:21:52.608 "trtype": "tcp", 00:21:52.608 "traddr": "10.0.0.2", 00:21:52.608 "adrfam": "ipv4", 00:21:52.608 "trsvcid": "4420", 00:21:52.608 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:52.608 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:52.608 "hdgst": false, 00:21:52.608 "ddgst": false 00:21:52.608 }, 00:21:52.608 "method": "bdev_nvme_attach_controller" 00:21:52.608 },{ 00:21:52.608 "params": { 00:21:52.608 "name": "Nvme10", 00:21:52.608 "trtype": "tcp", 00:21:52.608 "traddr": "10.0.0.2", 00:21:52.608 "adrfam": "ipv4", 00:21:52.608 "trsvcid": "4420", 00:21:52.608 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:52.608 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:52.608 "hdgst": false, 00:21:52.608 "ddgst": false 00:21:52.608 }, 00:21:52.608 "method": "bdev_nvme_attach_controller" 00:21:52.608 }' 00:21:52.608 [2024-11-19 10:48:59.956517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.608 [2024-11-19 10:48:59.997846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:53.986 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:53.986 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:21:53.986 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:53.986 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.986 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:54.245 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.245 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1747961 00:21:54.245 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:21:54.245 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:21:55.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1747961 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:55.182 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1747710 00:21:55.182 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:55.182 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:55.182 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:21:55.182 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:21:55.182 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:21:55.182 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:55.182 { 00:21:55.182 "params": { 00:21:55.182 "name": "Nvme$subsystem", 00:21:55.182 "trtype": "$TEST_TRANSPORT", 00:21:55.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.182 "adrfam": "ipv4", 00:21:55.182 "trsvcid": "$NVMF_PORT", 00:21:55.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.182 "hdgst": ${hdgst:-false}, 00:21:55.182 "ddgst": ${ddgst:-false} 00:21:55.182 }, 00:21:55.182 "method": "bdev_nvme_attach_controller" 00:21:55.182 } 00:21:55.182 EOF 00:21:55.182 )") 00:21:55.182 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:55.182 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:55.182 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:55.182 { 00:21:55.182 "params": { 00:21:55.182 "name": "Nvme$subsystem", 00:21:55.182 "trtype": "$TEST_TRANSPORT", 00:21:55.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.182 "adrfam": "ipv4", 00:21:55.182 "trsvcid": "$NVMF_PORT", 00:21:55.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.182 "hdgst": ${hdgst:-false}, 00:21:55.182 "ddgst": ${ddgst:-false} 00:21:55.182 }, 00:21:55.182 "method": "bdev_nvme_attach_controller" 00:21:55.182 } 00:21:55.182 EOF 00:21:55.182 )") 00:21:55.182 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:55.182 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:55.182 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:55.182 { 00:21:55.182 "params": { 00:21:55.182 "name": "Nvme$subsystem", 00:21:55.182 "trtype": "$TEST_TRANSPORT", 00:21:55.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.182 "adrfam": "ipv4", 00:21:55.182 "trsvcid": "$NVMF_PORT", 00:21:55.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.182 "hdgst": ${hdgst:-false}, 00:21:55.182 "ddgst": ${ddgst:-false} 00:21:55.182 }, 00:21:55.182 "method": "bdev_nvme_attach_controller" 00:21:55.182 } 00:21:55.182 EOF 00:21:55.182 )") 00:21:55.182 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:55.182 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:55.182 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:55.182 { 00:21:55.182 "params": { 00:21:55.182 "name": "Nvme$subsystem", 00:21:55.182 "trtype": "$TEST_TRANSPORT", 00:21:55.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.182 "adrfam": "ipv4", 00:21:55.182 "trsvcid": "$NVMF_PORT", 00:21:55.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.182 "hdgst": ${hdgst:-false}, 00:21:55.182 "ddgst": ${ddgst:-false} 00:21:55.182 }, 00:21:55.182 "method": "bdev_nvme_attach_controller" 00:21:55.182 } 00:21:55.182 EOF 00:21:55.182 )") 00:21:55.182 10:49:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:55.182 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:55.182 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:55.182 { 00:21:55.182 "params": { 00:21:55.182 "name": "Nvme$subsystem", 00:21:55.182 "trtype": "$TEST_TRANSPORT", 00:21:55.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.182 "adrfam": "ipv4", 00:21:55.182 "trsvcid": "$NVMF_PORT", 00:21:55.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.182 "hdgst": ${hdgst:-false}, 00:21:55.182 "ddgst": ${ddgst:-false} 00:21:55.182 }, 00:21:55.182 "method": "bdev_nvme_attach_controller" 00:21:55.182 } 00:21:55.182 EOF 00:21:55.182 )") 00:21:55.182 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:55.182 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:55.182 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:55.182 { 00:21:55.182 "params": { 00:21:55.182 "name": "Nvme$subsystem", 00:21:55.182 "trtype": "$TEST_TRANSPORT", 00:21:55.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.182 "adrfam": "ipv4", 00:21:55.182 "trsvcid": "$NVMF_PORT", 00:21:55.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.182 "hdgst": ${hdgst:-false}, 00:21:55.182 "ddgst": ${ddgst:-false} 00:21:55.182 }, 00:21:55.182 "method": "bdev_nvme_attach_controller" 00:21:55.182 } 00:21:55.182 EOF 00:21:55.182 )") 00:21:55.182 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:55.182 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:55.182 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:55.182 { 00:21:55.182 "params": { 00:21:55.182 "name": "Nvme$subsystem", 00:21:55.182 "trtype": "$TEST_TRANSPORT", 00:21:55.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.183 "adrfam": "ipv4", 00:21:55.183 "trsvcid": "$NVMF_PORT", 00:21:55.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.183 "hdgst": ${hdgst:-false}, 00:21:55.183 "ddgst": ${ddgst:-false} 00:21:55.183 }, 00:21:55.183 "method": "bdev_nvme_attach_controller" 00:21:55.183 } 00:21:55.183 EOF 00:21:55.183 )") 00:21:55.183 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:55.183 [2024-11-19 10:49:02.496026] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:21:55.183 [2024-11-19 10:49:02.496074] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1748330 ] 00:21:55.183 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:55.183 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:55.183 { 00:21:55.183 "params": { 00:21:55.183 "name": "Nvme$subsystem", 00:21:55.183 "trtype": "$TEST_TRANSPORT", 00:21:55.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.183 "adrfam": "ipv4", 00:21:55.183 "trsvcid": "$NVMF_PORT", 00:21:55.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.183 "hdgst": ${hdgst:-false}, 00:21:55.183 "ddgst": ${ddgst:-false} 00:21:55.183 }, 00:21:55.183 "method": "bdev_nvme_attach_controller" 00:21:55.183 } 00:21:55.183 EOF 00:21:55.183 )") 00:21:55.183 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:55.183 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:55.183 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:55.183 { 00:21:55.183 "params": { 00:21:55.183 "name": "Nvme$subsystem", 00:21:55.183 "trtype": "$TEST_TRANSPORT", 00:21:55.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.183 "adrfam": "ipv4", 00:21:55.183 "trsvcid": "$NVMF_PORT", 00:21:55.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.183 "hdgst": ${hdgst:-false}, 00:21:55.183 "ddgst": ${ddgst:-false} 00:21:55.183 }, 00:21:55.183 "method": "bdev_nvme_attach_controller" 00:21:55.183 } 00:21:55.183 EOF 00:21:55.183 )") 00:21:55.183 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:55.183 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:55.183 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:55.183 { 00:21:55.183 "params": { 00:21:55.183 "name": "Nvme$subsystem", 00:21:55.183 "trtype": "$TEST_TRANSPORT", 00:21:55.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.183 "adrfam": "ipv4", 00:21:55.183 "trsvcid": "$NVMF_PORT", 00:21:55.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.183 "hdgst": ${hdgst:-false}, 00:21:55.183 "ddgst": ${ddgst:-false} 00:21:55.183 }, 00:21:55.183 "method": "bdev_nvme_attach_controller" 00:21:55.183 } 00:21:55.183 EOF 00:21:55.183 )") 00:21:55.183 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:55.183 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
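The relaunch traced above hands the same generated document to bdevperf (shutdown.sh@92) as --json /dev/fd/62: that path is what bash process substitution looks like once the shell has parked the generator's stdout on a spare descriptor, and it is exactly the construct visible in the line-74 "Killed" message for the bdev_svc stub. A hedged sketch of the hand-off, reusing the illustrative helper from the earlier note; the queue-depth, IO-size, workload, and runtime flags are the ones in the trace:

# <(...) runs the generator concurrently and substitutes a /dev/fd path that
# bdevperf opens like an ordinary file, so no temp file is written.
bdevperf=./build/examples/bdevperf   # repo-relative form of the path traced above
"$bdevperf" --json <(gen_target_json_sketch 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 1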
00:21:55.183 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:21:55.183 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:55.183 "params": { 00:21:55.183 "name": "Nvme1", 00:21:55.183 "trtype": "tcp", 00:21:55.183 "traddr": "10.0.0.2", 00:21:55.183 "adrfam": "ipv4", 00:21:55.183 "trsvcid": "4420", 00:21:55.183 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:55.183 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:55.183 "hdgst": false, 00:21:55.183 "ddgst": false 00:21:55.183 }, 00:21:55.183 "method": "bdev_nvme_attach_controller" 00:21:55.183 },{ 00:21:55.183 "params": { 00:21:55.183 "name": "Nvme2", 00:21:55.183 "trtype": "tcp", 00:21:55.183 "traddr": "10.0.0.2", 00:21:55.183 "adrfam": "ipv4", 00:21:55.183 "trsvcid": "4420", 00:21:55.183 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:55.183 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:55.183 "hdgst": false, 00:21:55.183 "ddgst": false 00:21:55.183 }, 00:21:55.183 "method": "bdev_nvme_attach_controller" 00:21:55.183 },{ 00:21:55.183 "params": { 00:21:55.183 "name": "Nvme3", 00:21:55.183 "trtype": "tcp", 00:21:55.183 "traddr": "10.0.0.2", 00:21:55.183 "adrfam": "ipv4", 00:21:55.183 "trsvcid": "4420", 00:21:55.183 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:55.183 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:55.183 "hdgst": false, 00:21:55.183 "ddgst": false 00:21:55.183 }, 00:21:55.183 "method": "bdev_nvme_attach_controller" 00:21:55.183 },{ 00:21:55.183 "params": { 00:21:55.183 "name": "Nvme4", 00:21:55.183 "trtype": "tcp", 00:21:55.183 "traddr": "10.0.0.2", 00:21:55.183 "adrfam": "ipv4", 00:21:55.183 "trsvcid": "4420", 00:21:55.183 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:55.183 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:55.183 "hdgst": false, 00:21:55.183 "ddgst": false 00:21:55.183 }, 00:21:55.183 "method": "bdev_nvme_attach_controller" 00:21:55.183 },{ 00:21:55.183 "params": { 00:21:55.183 "name": "Nvme5", 00:21:55.183 "trtype": "tcp", 00:21:55.183 "traddr": "10.0.0.2", 00:21:55.183 "adrfam": "ipv4", 00:21:55.183 "trsvcid": "4420", 00:21:55.183 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:55.183 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:55.183 "hdgst": false, 00:21:55.183 "ddgst": false 00:21:55.183 }, 00:21:55.183 "method": "bdev_nvme_attach_controller" 00:21:55.183 },{ 00:21:55.183 "params": { 00:21:55.183 "name": "Nvme6", 00:21:55.183 "trtype": "tcp", 00:21:55.183 "traddr": "10.0.0.2", 00:21:55.183 "adrfam": "ipv4", 00:21:55.183 "trsvcid": "4420", 00:21:55.183 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:55.183 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:55.183 "hdgst": false, 00:21:55.183 "ddgst": false 00:21:55.183 }, 00:21:55.183 "method": "bdev_nvme_attach_controller" 00:21:55.183 },{ 00:21:55.183 "params": { 00:21:55.183 "name": "Nvme7", 00:21:55.183 "trtype": "tcp", 00:21:55.183 "traddr": "10.0.0.2", 00:21:55.183 "adrfam": "ipv4", 00:21:55.183 "trsvcid": "4420", 00:21:55.183 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:55.183 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:55.183 "hdgst": false, 00:21:55.183 "ddgst": false 00:21:55.183 }, 00:21:55.183 "method": "bdev_nvme_attach_controller" 00:21:55.183 },{ 00:21:55.183 "params": { 00:21:55.183 "name": "Nvme8", 00:21:55.183 "trtype": "tcp", 00:21:55.183 "traddr": "10.0.0.2", 00:21:55.183 "adrfam": "ipv4", 00:21:55.183 "trsvcid": "4420", 00:21:55.183 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:55.183 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:21:55.183 "hdgst": false,
00:21:55.183 "ddgst": false
00:21:55.183 },
00:21:55.183 "method": "bdev_nvme_attach_controller"
00:21:55.183 },{
00:21:55.183 "params": {
00:21:55.183 "name": "Nvme9",
00:21:55.183 "trtype": "tcp",
00:21:55.183 "traddr": "10.0.0.2",
00:21:55.183 "adrfam": "ipv4",
00:21:55.183 "trsvcid": "4420",
00:21:55.183 "subnqn": "nqn.2016-06.io.spdk:cnode9",
00:21:55.183 "hostnqn": "nqn.2016-06.io.spdk:host9",
00:21:55.183 "hdgst": false,
00:21:55.183 "ddgst": false
00:21:55.183 },
00:21:55.183 "method": "bdev_nvme_attach_controller"
00:21:55.183 },{
00:21:55.183 "params": {
00:21:55.183 "name": "Nvme10",
00:21:55.183 "trtype": "tcp",
00:21:55.183 "traddr": "10.0.0.2",
00:21:55.183 "adrfam": "ipv4",
00:21:55.183 "trsvcid": "4420",
00:21:55.183 "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:21:55.183 "hostnqn": "nqn.2016-06.io.spdk:host10",
00:21:55.183 "hdgst": false,
00:21:55.183 "ddgst": false
00:21:55.183 },
00:21:55.183 "method": "bdev_nvme_attach_controller"
00:21:55.183 }'
00:21:55.183 [2024-11-19 10:49:02.571803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:55.183 [2024-11-19 10:49:02.613667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:56.560 Running I/O for 1 seconds...
00:21:57.758 2190.00 IOPS, 136.88 MiB/s
00:21:57.758 Latency(us)
00:21:57.758 [2024-11-19T09:49:05.207Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:57.758 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:57.758 Verification LBA range: start 0x0 length 0x400
00:21:57.758 Nvme1n1 : 1.07 239.46 14.97 0.00 0.00 264672.61 19261.89 246187.41
00:21:57.758 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:57.758 Verification LBA range: start 0x0 length 0x400
00:21:57.758 Nvme2n1 : 1.16 276.08 17.26 0.00 0.00 224545.57 16868.40 212450.62
00:21:57.758 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:57.758 Verification LBA range: start 0x0 length 0x400
00:21:57.758 Nvme3n1 : 1.15 277.99 17.37 0.00 0.00 221675.34 15044.79 218833.25
00:21:57.758 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:57.758 Verification LBA range: start 0x0 length 0x400
00:21:57.758 Nvme4n1 : 1.10 299.46 18.72 0.00 0.00 198249.77 20173.69 210627.01
00:21:57.758 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:57.758 Verification LBA range: start 0x0 length 0x400
00:21:57.758 Nvme5n1 : 1.15 277.13 17.32 0.00 0.00 215959.28 16298.52 211538.81
00:21:57.758 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:57.758 Verification LBA range: start 0x0 length 0x400
00:21:57.758 Nvme6n1 : 1.17 276.09 17.26 0.00 0.00 213888.15 16640.45 219745.06
00:21:57.758 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:57.758 Verification LBA range: start 0x0 length 0x400
00:21:57.758 Nvme7n1 : 1.16 274.71 17.17 0.00 0.00 211672.02 13563.10 237069.36
00:21:57.758 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:57.758 Verification LBA range: start 0x0 length 0x400
00:21:57.758 Nvme8n1 : 1.17 273.14 17.07 0.00 0.00 209853.31 10428.77 222480.47
00:21:57.758 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:57.758 Verification LBA range: start 0x0 length 0x400
00:21:57.758 Nvme9n1 : 1.18 271.75 16.98 0.00 0.00 207942.34 16868.40 230686.72
00:21:57.758 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:57.758 Verification LBA range: start 0x0 length 0x400
00:21:57.758 Nvme10n1 : 1.18 271.10 16.94 0.00 0.00 205399.62 17324.30 242540.19
[2024-11-19T09:49:05.207Z] ===================================================================================================================
[2024-11-19T09:49:05.207Z] Total : 2736.92 171.06 0.00 0.00 216359.86 10428.77 246187.41
00:21:58.018 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
00:21:58.018 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:21:58.018 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:21:58.018 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:21:58.018 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini
00:21:58.018 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:58.018 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync
00:21:58.018 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:58.018 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e
00:21:58.018 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:58.018 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:58.018 rmmod nvme_tcp
00:21:58.018 rmmod nvme_fabrics
00:21:58.018 rmmod nvme_keyring
00:21:58.018 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:58.018 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e
00:21:58.018 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0
00:21:58.018 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 1747710 ']'
00:21:58.018 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 1747710
00:21:58.018 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 1747710 ']'
00:21:58.018 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 1747710
00:21:58.018 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname
00:21:58.018 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:58.018 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1747710
00:21:58.018 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:21:58.018 10:49:05
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:58.018 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1747710' 00:21:58.018 killing process with pid 1747710 00:21:58.018 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 1747710 00:21:58.018 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 1747710 00:21:58.586 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:58.586 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:58.586 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:58.586 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:21:58.586 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:21:58.586 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:58.586 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:21:58.586 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:58.586 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:58.586 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.586 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:58.586 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.494 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:00.494 00:22:00.494 real 0m14.858s 00:22:00.494 user 0m31.907s 00:22:00.494 sys 0m5.836s 00:22:00.494 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:00.494 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:00.494 ************************************ 00:22:00.494 END TEST nvmf_shutdown_tc1 00:22:00.494 ************************************ 00:22:00.494 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:00.494 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:00.494 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:00.494 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:00.494 ************************************ 00:22:00.494 START TEST nvmf_shutdown_tc2 00:22:00.494 ************************************ 00:22:00.494 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # 
nvmf_shutdown_tc2 00:22:00.494 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:00.494 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:00.494 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:00.494 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:00.494 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:00.494 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:00.494 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:00.494 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.494 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:00.494 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.494 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:00.494 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:00.494 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:00.495 10:49:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:00.495 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.495 10:49:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:00.495 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:00.495 Found net devices under 0000:86:00.0: cvl_0_0 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:00.495 10:49:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:00.495 Found net devices under 0000:86:00.1: cvl_0_1 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:00.495 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:00.755 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:00.755 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:00.755 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:22:00.755 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:00.755 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:00.755 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:00.755 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:00.755 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:00.755 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:00.755 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:00.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:00.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.370 ms 00:22:00.755 00:22:00.755 --- 10.0.0.2 ping statistics --- 00:22:00.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.755 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:22:00.755 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:00.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:00.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:22:00.755 00:22:00.755 --- 10.0.0.1 ping statistics --- 00:22:00.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.755 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:22:00.755 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:00.755 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:22:00.755 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:00.755 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:00.755 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:00.755 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:00.755 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:00.755 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:00.755 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:00.755 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:00.755 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:00.755 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:22:00.755 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:00.756 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1749375 00:22:00.756 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1749375 00:22:00.756 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:00.756 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1749375 ']' 00:22:00.756 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.756 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:00.756 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:00.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:00.756 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:01.015 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:01.015 [2024-11-19 10:49:08.250190] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:22:01.015 [2024-11-19 10:49:08.250234] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:01.015 [2024-11-19 10:49:08.329061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:01.015 [2024-11-19 10:49:08.371224] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:01.015 [2024-11-19 10:49:08.371263] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:01.015 [2024-11-19 10:49:08.371270] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:01.015 [2024-11-19 10:49:08.371276] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:01.015 [2024-11-19 10:49:08.371281] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
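Before the tc2 target starts, the prologue above rebuilds the split test network: one port of the NIC pair is moved into a private namespace, both ends get 10.0.0.0/24 addresses, the NVMe/TCP listener port is opened, and one ping in each direction gates the rest of the run. A condensed sketch of that bring-up with the interface and namespace names from the trace (root required; the address flushes and error handling are omitted):

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"            # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator port stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Admit NVMe/TCP (port 4420) from the initiator side, as in the rule above.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                         # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1     # target -> initiator

Isolating the target's port in its own namespace keeps the kernel from short-circuiting 10.0.0.1 to 10.0.0.2 over lo, so the test I/O really crosses the link between the two ports.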
00:22:01.015 [2024-11-19 10:49:08.372992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:01.015 [2024-11-19 10:49:08.373097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:01.015 [2024-11-19 10:49:08.373203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:01.015 [2024-11-19 10:49:08.373204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:01.275 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:01.275 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:01.275 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:01.275 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:01.275 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:01.275 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:01.275 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:01.275 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.275 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:01.275 [2024-11-19 10:49:08.509389] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:01.275 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.275 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:01.275 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:01.275 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:01.275 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:01.275 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:01.275 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:01.275 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:01.275 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:01.275 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:01.275 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:01.275 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:01.275 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:22:01.275 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:01.275 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:01.275 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:01.275 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:01.275 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:01.275 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:01.275 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:01.275 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:01.275 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:01.275 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:01.275 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:01.275 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:01.275 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:01.275 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:01.275 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.275 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:01.275 Malloc1 00:22:01.275 [2024-11-19 10:49:08.620714] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:01.275 Malloc2 00:22:01.275 Malloc3 00:22:01.534 Malloc4 00:22:01.534 Malloc5 00:22:01.534 Malloc6 00:22:01.534 Malloc7 00:22:01.534 Malloc8 00:22:01.534 Malloc9 00:22:01.794 Malloc10 00:22:01.794 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.794 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:01.794 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:01.794 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:01.794 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1749553 00:22:01.794 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1749553 /var/tmp/bdevperf.sock 00:22:01.794 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1749553 ']' 00:22:01.794 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:01.794 10:49:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:01.794 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:01.794 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:01.794 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:01.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:01.794 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:22:01.794 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:01.794 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:22:01.794 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:01.795 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:01.795 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:01.795 { 00:22:01.795 "params": { 00:22:01.795 "name": "Nvme$subsystem", 00:22:01.795 "trtype": "$TEST_TRANSPORT", 00:22:01.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.795 "adrfam": "ipv4", 00:22:01.795 "trsvcid": "$NVMF_PORT", 00:22:01.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.795 "hdgst": ${hdgst:-false}, 00:22:01.795 "ddgst": ${ddgst:-false} 00:22:01.795 }, 00:22:01.795 "method": "bdev_nvme_attach_controller" 00:22:01.795 } 00:22:01.795 EOF 00:22:01.795 )") 00:22:01.795 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:01.795 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:01.795 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:01.795 { 00:22:01.795 "params": { 00:22:01.795 "name": "Nvme$subsystem", 00:22:01.795 "trtype": "$TEST_TRANSPORT", 00:22:01.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.795 "adrfam": "ipv4", 00:22:01.795 "trsvcid": "$NVMF_PORT", 00:22:01.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.795 "hdgst": ${hdgst:-false}, 00:22:01.795 "ddgst": ${ddgst:-false} 00:22:01.795 }, 00:22:01.795 "method": "bdev_nvme_attach_controller" 00:22:01.795 } 00:22:01.795 EOF 00:22:01.795 )") 00:22:01.795 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:01.795 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:01.795 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:01.795 { 00:22:01.795 "params": { 00:22:01.795 
"name": "Nvme$subsystem", 00:22:01.795 "trtype": "$TEST_TRANSPORT", 00:22:01.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.795 "adrfam": "ipv4", 00:22:01.795 "trsvcid": "$NVMF_PORT", 00:22:01.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.795 "hdgst": ${hdgst:-false}, 00:22:01.795 "ddgst": ${ddgst:-false} 00:22:01.795 }, 00:22:01.795 "method": "bdev_nvme_attach_controller" 00:22:01.795 } 00:22:01.795 EOF 00:22:01.795 )") 00:22:01.795 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:01.795 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:01.795 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:01.795 { 00:22:01.795 "params": { 00:22:01.795 "name": "Nvme$subsystem", 00:22:01.795 "trtype": "$TEST_TRANSPORT", 00:22:01.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.795 "adrfam": "ipv4", 00:22:01.795 "trsvcid": "$NVMF_PORT", 00:22:01.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.795 "hdgst": ${hdgst:-false}, 00:22:01.795 "ddgst": ${ddgst:-false} 00:22:01.795 }, 00:22:01.795 "method": "bdev_nvme_attach_controller" 00:22:01.795 } 00:22:01.795 EOF 00:22:01.795 )") 00:22:01.795 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:01.795 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:01.795 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:01.795 { 00:22:01.795 "params": { 00:22:01.795 "name": "Nvme$subsystem", 00:22:01.795 "trtype": "$TEST_TRANSPORT", 00:22:01.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.795 "adrfam": "ipv4", 00:22:01.795 "trsvcid": "$NVMF_PORT", 00:22:01.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.795 "hdgst": ${hdgst:-false}, 00:22:01.795 "ddgst": ${ddgst:-false} 00:22:01.795 }, 00:22:01.795 "method": "bdev_nvme_attach_controller" 00:22:01.795 } 00:22:01.795 EOF 00:22:01.795 )") 00:22:01.795 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:01.795 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:01.795 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:01.795 { 00:22:01.795 "params": { 00:22:01.795 "name": "Nvme$subsystem", 00:22:01.795 "trtype": "$TEST_TRANSPORT", 00:22:01.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.795 "adrfam": "ipv4", 00:22:01.795 "trsvcid": "$NVMF_PORT", 00:22:01.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.795 "hdgst": ${hdgst:-false}, 00:22:01.795 "ddgst": ${ddgst:-false} 00:22:01.795 }, 00:22:01.795 "method": "bdev_nvme_attach_controller" 00:22:01.795 } 00:22:01.795 EOF 00:22:01.795 )") 00:22:01.795 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:01.795 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:22:01.795 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:01.795 { 00:22:01.795 "params": { 00:22:01.795 "name": "Nvme$subsystem", 00:22:01.795 "trtype": "$TEST_TRANSPORT", 00:22:01.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.795 "adrfam": "ipv4", 00:22:01.795 "trsvcid": "$NVMF_PORT", 00:22:01.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.795 "hdgst": ${hdgst:-false}, 00:22:01.795 "ddgst": ${ddgst:-false} 00:22:01.795 }, 00:22:01.795 "method": "bdev_nvme_attach_controller" 00:22:01.795 } 00:22:01.795 EOF 00:22:01.795 )") 00:22:01.795 [2024-11-19 10:49:09.091058] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:22:01.795 [2024-11-19 10:49:09.091104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1749553 ] 00:22:01.795 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:01.795 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:01.795 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:01.795 { 00:22:01.795 "params": { 00:22:01.795 "name": "Nvme$subsystem", 00:22:01.795 "trtype": "$TEST_TRANSPORT", 00:22:01.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.795 "adrfam": "ipv4", 00:22:01.795 "trsvcid": "$NVMF_PORT", 00:22:01.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.795 "hdgst": ${hdgst:-false}, 00:22:01.795 "ddgst": ${ddgst:-false} 00:22:01.795 }, 00:22:01.795 "method": "bdev_nvme_attach_controller" 00:22:01.795 } 00:22:01.795 EOF 00:22:01.795 )") 00:22:01.795 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:01.795 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:01.795 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:01.795 { 00:22:01.795 "params": { 00:22:01.795 "name": "Nvme$subsystem", 00:22:01.795 "trtype": "$TEST_TRANSPORT", 00:22:01.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.795 "adrfam": "ipv4", 00:22:01.795 "trsvcid": "$NVMF_PORT", 00:22:01.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.795 "hdgst": ${hdgst:-false}, 00:22:01.795 "ddgst": ${ddgst:-false} 00:22:01.795 }, 00:22:01.795 "method": "bdev_nvme_attach_controller" 00:22:01.795 } 00:22:01.795 EOF 00:22:01.795 )") 00:22:01.795 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:01.795 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:01.795 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:01.795 { 00:22:01.795 "params": { 00:22:01.795 "name": "Nvme$subsystem", 00:22:01.795 "trtype": "$TEST_TRANSPORT", 00:22:01.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.795 
"adrfam": "ipv4", 00:22:01.795 "trsvcid": "$NVMF_PORT", 00:22:01.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.795 "hdgst": ${hdgst:-false}, 00:22:01.795 "ddgst": ${ddgst:-false} 00:22:01.795 }, 00:22:01.795 "method": "bdev_nvme_attach_controller" 00:22:01.795 } 00:22:01.795 EOF 00:22:01.795 )") 00:22:01.795 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:01.795 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:22:01.795 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:22:01.796 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:01.796 "params": { 00:22:01.796 "name": "Nvme1", 00:22:01.796 "trtype": "tcp", 00:22:01.796 "traddr": "10.0.0.2", 00:22:01.796 "adrfam": "ipv4", 00:22:01.796 "trsvcid": "4420", 00:22:01.796 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:01.796 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:01.796 "hdgst": false, 00:22:01.796 "ddgst": false 00:22:01.796 }, 00:22:01.796 "method": "bdev_nvme_attach_controller" 00:22:01.796 },{ 00:22:01.796 "params": { 00:22:01.796 "name": "Nvme2", 00:22:01.796 "trtype": "tcp", 00:22:01.796 "traddr": "10.0.0.2", 00:22:01.796 "adrfam": "ipv4", 00:22:01.796 "trsvcid": "4420", 00:22:01.796 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:01.796 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:01.796 "hdgst": false, 00:22:01.796 "ddgst": false 00:22:01.796 }, 00:22:01.796 "method": "bdev_nvme_attach_controller" 00:22:01.796 },{ 00:22:01.796 "params": { 00:22:01.796 "name": "Nvme3", 00:22:01.796 "trtype": "tcp", 00:22:01.796 "traddr": "10.0.0.2", 00:22:01.796 "adrfam": "ipv4", 00:22:01.796 "trsvcid": "4420", 00:22:01.796 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:01.796 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:01.796 "hdgst": false, 00:22:01.796 "ddgst": false 00:22:01.796 }, 00:22:01.796 "method": "bdev_nvme_attach_controller" 00:22:01.796 },{ 00:22:01.796 "params": { 00:22:01.796 "name": "Nvme4", 00:22:01.796 "trtype": "tcp", 00:22:01.796 "traddr": "10.0.0.2", 00:22:01.796 "adrfam": "ipv4", 00:22:01.796 "trsvcid": "4420", 00:22:01.796 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:01.796 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:01.796 "hdgst": false, 00:22:01.796 "ddgst": false 00:22:01.796 }, 00:22:01.796 "method": "bdev_nvme_attach_controller" 00:22:01.796 },{ 00:22:01.796 "params": { 00:22:01.796 "name": "Nvme5", 00:22:01.796 "trtype": "tcp", 00:22:01.796 "traddr": "10.0.0.2", 00:22:01.796 "adrfam": "ipv4", 00:22:01.796 "trsvcid": "4420", 00:22:01.796 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:01.796 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:01.796 "hdgst": false, 00:22:01.796 "ddgst": false 00:22:01.796 }, 00:22:01.796 "method": "bdev_nvme_attach_controller" 00:22:01.796 },{ 00:22:01.796 "params": { 00:22:01.796 "name": "Nvme6", 00:22:01.796 "trtype": "tcp", 00:22:01.796 "traddr": "10.0.0.2", 00:22:01.796 "adrfam": "ipv4", 00:22:01.796 "trsvcid": "4420", 00:22:01.796 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:01.796 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:01.796 "hdgst": false, 00:22:01.796 "ddgst": false 00:22:01.796 }, 00:22:01.796 "method": "bdev_nvme_attach_controller" 00:22:01.796 },{ 00:22:01.796 "params": { 00:22:01.796 "name": "Nvme7", 00:22:01.796 "trtype": "tcp", 00:22:01.796 "traddr": "10.0.0.2", 
00:22:01.796 "adrfam": "ipv4", 00:22:01.796 "trsvcid": "4420", 00:22:01.796 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:01.796 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:01.796 "hdgst": false, 00:22:01.796 "ddgst": false 00:22:01.796 }, 00:22:01.796 "method": "bdev_nvme_attach_controller" 00:22:01.796 },{ 00:22:01.796 "params": { 00:22:01.796 "name": "Nvme8", 00:22:01.796 "trtype": "tcp", 00:22:01.796 "traddr": "10.0.0.2", 00:22:01.796 "adrfam": "ipv4", 00:22:01.796 "trsvcid": "4420", 00:22:01.796 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:01.796 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:01.796 "hdgst": false, 00:22:01.796 "ddgst": false 00:22:01.796 }, 00:22:01.796 "method": "bdev_nvme_attach_controller" 00:22:01.796 },{ 00:22:01.796 "params": { 00:22:01.796 "name": "Nvme9", 00:22:01.796 "trtype": "tcp", 00:22:01.796 "traddr": "10.0.0.2", 00:22:01.796 "adrfam": "ipv4", 00:22:01.796 "trsvcid": "4420", 00:22:01.796 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:01.796 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:01.796 "hdgst": false, 00:22:01.796 "ddgst": false 00:22:01.796 }, 00:22:01.796 "method": "bdev_nvme_attach_controller" 00:22:01.796 },{ 00:22:01.796 "params": { 00:22:01.796 "name": "Nvme10", 00:22:01.796 "trtype": "tcp", 00:22:01.796 "traddr": "10.0.0.2", 00:22:01.796 "adrfam": "ipv4", 00:22:01.796 "trsvcid": "4420", 00:22:01.796 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:01.796 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:01.796 "hdgst": false, 00:22:01.796 "ddgst": false 00:22:01.796 }, 00:22:01.796 "method": "bdev_nvme_attach_controller" 00:22:01.796 }' 00:22:01.796 [2024-11-19 10:49:09.168065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.796 [2024-11-19 10:49:09.209365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.699 Running I/O for 10 seconds... 
00:22:03.699 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:03.699 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:03.699 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:03.699 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.699 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:03.699 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.699 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:03.699 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:03.699 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:03.699 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:03.699 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:03.700 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:03.700 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:03.700 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:03.700 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.700 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:03.700 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:03.700 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.700 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:03.700 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:03.700 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:03.959 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:03.959 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:03.959 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:03.959 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:03.959 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.959 10:49:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:03.959 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.959 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:03.959 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:03.959 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:03.959 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:03.959 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:03.959 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1749553 00:22:03.959 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1749553 ']' 00:22:03.959 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1749553 00:22:03.959 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:03.959 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:03.959 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1749553 00:22:04.218 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:04.218 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:04.218 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1749553' 00:22:04.218 killing process with pid 1749553 00:22:04.218 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1749553 00:22:04.218 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1749553
00:22:04.218 Received shutdown signal, test time was about 0.655719 seconds
00:22:04.218
00:22:04.218 Latency(us)
00:22:04.218 [2024-11-19T09:49:11.667Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:04.218 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:04.218 Verification LBA range: start 0x0 length 0x400
00:22:04.218 Nvme1n1 : 0.64 299.32 18.71 0.00 0.00 210207.61 16982.37 215186.03
00:22:04.218 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:04.218 Verification LBA range: start 0x0 length 0x400
00:22:04.218 Nvme2n1 : 0.65 295.91 18.49 0.00 0.00 207792.16 26784.28 184184.65
00:22:04.218 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:04.218 Verification LBA range: start 0x0 length 0x400
00:22:04.218 Nvme3n1 : 0.63 302.59 18.91 0.00 0.00 197505.11 13563.10 216097.84
00:22:04.218 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:04.218 Verification LBA range: start 0x0 length 0x400
00:22:04.218 Nvme4n1 : 0.65 294.37 18.40 0.00 0.00 197755.77 18236.10 221568.67
00:22:04.218 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:04.218 Verification LBA range: start 0x0 length 0x400
00:22:04.218 Nvme5n1 : 0.63 204.70 12.79 0.00 0.00 276056.60 19945.74 230686.72
00:22:04.218 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:04.218 Verification LBA range: start 0x0 length 0x400
00:22:04.218 Nvme6n1 : 0.63 203.59 12.72 0.00 0.00 269690.43 23137.06 242540.19
00:22:04.218 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:04.218 Verification LBA range: start 0x0 length 0x400
00:22:04.218 Nvme7n1 : 0.64 300.27 18.77 0.00 0.00 178239.52 28379.94 202420.76
00:22:04.218 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:04.218 Verification LBA range: start 0x0 length 0x400
00:22:04.218 Nvme8n1 : 0.65 297.29 18.58 0.00 0.00 175031.28 15614.66 200597.15
00:22:04.218 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:04.218 Verification LBA range: start 0x0 length 0x400
00:22:04.218 Nvme9n1 : 0.66 293.11 18.32 0.00 0.00 172848.83 16754.42 217921.45
00:22:04.218 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:04.218 Verification LBA range: start 0x0 length 0x400
00:22:04.218 Nvme10n1 : 0.62 216.59 13.54 0.00 0.00 219650.59 3134.33 217921.45
00:22:04.218 [2024-11-19T09:49:11.667Z] ===================================================================================================================
00:22:04.218 [2024-11-19T09:49:11.667Z] Total : 2707.73 169.23 0.00 0.00 205565.06 3134.33 242540.19
00:22:04.218 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:22:05.595 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1749375 00:22:05.595 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:22:05.595 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:05.595 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:05.595 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:05.595 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:05.595 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:05.595 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:05.595 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:05.595 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:05.595 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:05.595 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:05.595 rmmod nvme_tcp 00:22:05.595 rmmod nvme_fabrics 00:22:05.595 rmmod nvme_keyring 00:22:05.595 10:49:12
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:05.595 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:05.596 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:05.596 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 1749375 ']' 00:22:05.596 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 1749375 00:22:05.596 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1749375 ']' 00:22:05.596 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1749375 00:22:05.596 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:05.596 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:05.596 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1749375 00:22:05.596 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:05.596 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:05.596 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1749375' 00:22:05.596 killing process with pid 1749375 00:22:05.596 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1749375 00:22:05.596 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1749375 00:22:05.854 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:05.854 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:05.854 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:05.854 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:05.854 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:22:05.855 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:05.855 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:22:05.855 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:05.855 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:05.855 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.855 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:05.855 10:49:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.390 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:08.390 00:22:08.390 real 0m7.320s 00:22:08.390 user 0m21.561s 00:22:08.390 sys 0m1.305s 00:22:08.390 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:08.390 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:08.390 ************************************ 00:22:08.390 END TEST nvmf_shutdown_tc2 00:22:08.390 ************************************ 00:22:08.390 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:08.390 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:08.390 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:08.391 ************************************ 00:22:08.391 START TEST nvmf_shutdown_tc3 00:22:08.391 ************************************ 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:08.391 10:49:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:08.391 10:49:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:08.391 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:08.391 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:08.391 10:49:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:08.391 Found net devices under 0000:86:00.0: cvl_0_0 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:08.391 Found net devices under 0000:86:00.1: cvl_0_1 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:08.391 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:08.392 10:49:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:08.392 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:08.392 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:22:08.392 00:22:08.392 --- 10.0.0.2 ping statistics --- 00:22:08.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.392 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:08.392 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:08.392 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:22:08.392 00:22:08.392 --- 10.0.0.1 ping statistics --- 00:22:08.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.392 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1750815 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1750815 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1750815 ']' 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
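Note the triple ip netns exec cvl_0_0_ns_spdk prefix on the tc3 nvmf_tgt command line above (tc2's had it twice): nvmf/common.sh@293 prepends the namespace wrapper to NVMF_APP on every nvmftestinit, so each test case in this file stacks one more copy. Re-entering the namespace a process is already in is a no-op, so the repetition is harmless noise. A sketch of the accumulation (the loop is illustrative; the array names are the ones in the trace):

# The prepend traced at nvmf/common.sh@293, run once per test case:
NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
NVMF_APP=(nvmf_tgt)                     # illustrative, real path omitted
for tc in tc1 tc2 tc3; do               # one nvmftestinit per test case
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
done
echo "${NVMF_APP[@]}"
# -> ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk nvmf_tgt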
00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:08.392 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:08.392 [2024-11-19 10:49:15.664648] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:22:08.392 [2024-11-19 10:49:15.664690] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:08.392 [2024-11-19 10:49:15.731046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:08.392 [2024-11-19 10:49:15.774166] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:08.392 [2024-11-19 10:49:15.774204] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:08.392 [2024-11-19 10:49:15.774212] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:08.392 [2024-11-19 10:49:15.774219] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:08.392 [2024-11-19 10:49:15.774224] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:08.392 [2024-11-19 10:49:15.777966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:08.392 [2024-11-19 10:49:15.778090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:08.392 [2024-11-19 10:49:15.778196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:08.392 [2024-11-19 10:49:15.778197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:08.651 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:08.651 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:08.651 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:08.651 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:08.651 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:08.651 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:08.651 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:08.651 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.651 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:08.651 [2024-11-19 10:49:15.923565] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:08.651 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.651 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:08.651 10:49:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:08.651 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:08.651 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:08.651 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:08.651 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:08.651 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat
[the shutdown.sh@28/@29 for/cat trace pair above repeats identically for all 10 subsystems; 9 duplicate iterations collapsed]
00:22:08.652 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:08.652 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.652 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:08.652 Malloc1
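The rpcs.txt batch built by that loop is consumed in one shot by rpc_cmd at shutdown.sh@36, whose outputs (Malloc1 above, Malloc2..Malloc10 and the 10.0.0.2:4420 listener notice just below) follow. The heredoc contents are not echoed into this trace; given the resulting objects, each iteration plausibly appends per-subsystem RPCs along these lines (the method names are standard scripts/rpc.py methods, but the sizes and flags here are illustrative assumptions, not taken from the log):

for i in {1..10}; do
cat <<EOF >> rpcs.txt
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
rpc_cmd < rpcs.txt   # rpc_cmd is the autotest_common.sh helper that feeds scripts/rpc.py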
00:22:08.652 [2024-11-19 10:49:16.029904] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:08.652 Malloc2 00:22:08.652 Malloc3 00:22:08.910 Malloc4 00:22:08.910 Malloc5 00:22:08.910 Malloc6 00:22:08.910 Malloc7 00:22:08.910 Malloc8 00:22:09.170 Malloc9 00:22:09.170 Malloc10 00:22:09.170 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.170 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:09.170 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:09.170 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:09.170 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1750875 00:22:09.170 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1750875 /var/tmp/bdevperf.sock 00:22:09.170 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1750875 ']' 00:22:09.170 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:09.170 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:09.170 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:09.170 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:09.170 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:09.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
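bdevperf here is the I/O generator: -q 64 keeps 64 I/Os in flight per bdev, -o 65536 uses 64 KiB I/Os, -w verify runs a write/read-back/compare workload, and -t 10 runs it for ten seconds. The JSON on /dev/fd/63 comes from gen_nvmf_target_json via process substitution, so the launch plus the completion gate traced further below (shutdown.sh@59-@70) reduce to roughly the following (gen_nvmf_target_json and rpc_cmd are harness helpers from nvmf/common.sh and autotest_common.sh; paths as traced):

./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json {1..10}) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!

# waitforio: poll one controller's read counter until enough I/O has completed,
# up to 10 tries, 0.25 s apart, before the test proceeds to shut the target down.
i=10; ret=1
while ((i != 0)); do
    read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 |
        jq -r '.bdevs[0].num_read_ops')
    [ "$read_io_count" -ge 100 ] && { ret=0; break; }   # threshold from the trace
    sleep 0.25
    ((i--))
done

In the run above the first poll sees 67 read ops and the second 131, so the gate opens on the second try.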
00:22:09.170 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:22:09.170 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:09.170 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:22:09.170 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:09.170 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:09.170 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)") 00:22:09.170 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat
[the nvmf/common.sh@562/@582 fragment trace above repeats verbatim for each of the 10 subsystems; 9 duplicate iterations collapsed. The bdevperf startup notices below interleave with the loop output:]
00:22:09.171 [2024-11-19 10:49:16.513445] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:22:09.171 [2024-11-19 10:49:16.513489] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1750875 ] 00:22:09.171 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq .
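Each pass through the nvmf/common.sh@562 loop appends one attach-controller fragment to the config array; the jq/IFS/printf trio traced next is gen_nvmf_target_json comma-joining those fragments ("${config[*]}" joins elements on the first character of IFS) and running the assembled document through jq . to validate and pretty-print it, yielding the ten-entry JSON printed below. A toy reproduction of the join step (two abbreviated fragments; the real helper splices the joined string into a fuller bdev_nvme config document before jq sees it, so the array wrapper here merely stands in for that outer document):

config=('{"params": {"name": "Nvme1"}, "method": "bdev_nvme_attach_controller"}'
        '{"params": {"name": "Nvme2"}, "method": "bdev_nvme_attach_controller"}')
(IFS=,; printf '[%s]\n' "${config[*]}") | jq .   # comma-join, then validate/pretty-print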
00:22:09.171 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:22:09.171 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:09.171 "params": { 00:22:09.171 "name": "Nvme1", 00:22:09.171 "trtype": "tcp", 00:22:09.171 "traddr": "10.0.0.2", 00:22:09.171 "adrfam": "ipv4", 00:22:09.171 "trsvcid": "4420", 00:22:09.171 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.171 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:09.171 "hdgst": false, 00:22:09.171 "ddgst": false 00:22:09.171 }, 00:22:09.171 "method": "bdev_nvme_attach_controller" 00:22:09.171 },{ 00:22:09.171 "params": { 00:22:09.171 "name": "Nvme2", 00:22:09.171 "trtype": "tcp", 00:22:09.171 "traddr": "10.0.0.2", 00:22:09.171 "adrfam": "ipv4", 00:22:09.171 "trsvcid": "4420", 00:22:09.171 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:09.171 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:09.171 "hdgst": false, 00:22:09.171 "ddgst": false 00:22:09.171 }, 00:22:09.171 "method": "bdev_nvme_attach_controller" 00:22:09.171 },{ 00:22:09.171 "params": { 00:22:09.171 "name": "Nvme3", 00:22:09.171 "trtype": "tcp", 00:22:09.171 "traddr": "10.0.0.2", 00:22:09.171 "adrfam": "ipv4", 00:22:09.171 "trsvcid": "4420", 00:22:09.171 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:09.171 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:09.171 "hdgst": false, 00:22:09.171 "ddgst": false 00:22:09.171 }, 00:22:09.171 "method": "bdev_nvme_attach_controller" 00:22:09.171 },{ 00:22:09.171 "params": { 00:22:09.171 "name": "Nvme4", 00:22:09.171 "trtype": "tcp", 00:22:09.171 "traddr": "10.0.0.2", 00:22:09.171 "adrfam": "ipv4", 00:22:09.171 "trsvcid": "4420", 00:22:09.171 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:09.171 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:09.171 "hdgst": false, 00:22:09.171 "ddgst": false 00:22:09.171 }, 00:22:09.172 "method": "bdev_nvme_attach_controller" 00:22:09.172 },{ 00:22:09.172 "params": { 00:22:09.172 "name": "Nvme5", 00:22:09.172 "trtype": "tcp", 00:22:09.172 "traddr": "10.0.0.2", 00:22:09.172 "adrfam": "ipv4", 00:22:09.172 "trsvcid": "4420", 00:22:09.172 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:09.172 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:09.172 "hdgst": false, 00:22:09.172 "ddgst": false 00:22:09.172 }, 00:22:09.172 "method": "bdev_nvme_attach_controller" 00:22:09.172 },{ 00:22:09.172 "params": { 00:22:09.172 "name": "Nvme6", 00:22:09.172 "trtype": "tcp", 00:22:09.172 "traddr": "10.0.0.2", 00:22:09.172 "adrfam": "ipv4", 00:22:09.172 "trsvcid": "4420", 00:22:09.172 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:09.172 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:09.172 "hdgst": false, 00:22:09.172 "ddgst": false 00:22:09.172 }, 00:22:09.172 "method": "bdev_nvme_attach_controller" 00:22:09.172 },{ 00:22:09.172 "params": { 00:22:09.172 "name": "Nvme7", 00:22:09.172 "trtype": "tcp", 00:22:09.172 "traddr": "10.0.0.2", 00:22:09.172 "adrfam": "ipv4", 00:22:09.172 "trsvcid": "4420", 00:22:09.172 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:09.172 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:09.172 "hdgst": false, 00:22:09.172 "ddgst": false 00:22:09.172 }, 00:22:09.172 "method": "bdev_nvme_attach_controller" 00:22:09.172 },{ 00:22:09.172 "params": { 00:22:09.172 "name": "Nvme8", 00:22:09.172 "trtype": "tcp", 00:22:09.172 "traddr": "10.0.0.2", 00:22:09.172 "adrfam": "ipv4", 00:22:09.172 "trsvcid": "4420", 00:22:09.172 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:09.172 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:22:09.172 "hdgst": false, 00:22:09.172 "ddgst": false 00:22:09.172 }, 00:22:09.172 "method": "bdev_nvme_attach_controller" 00:22:09.172 },{ 00:22:09.172 "params": { 00:22:09.172 "name": "Nvme9", 00:22:09.172 "trtype": "tcp", 00:22:09.172 "traddr": "10.0.0.2", 00:22:09.172 "adrfam": "ipv4", 00:22:09.172 "trsvcid": "4420", 00:22:09.172 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:09.172 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:09.172 "hdgst": false, 00:22:09.172 "ddgst": false 00:22:09.172 }, 00:22:09.172 "method": "bdev_nvme_attach_controller" 00:22:09.172 },{ 00:22:09.172 "params": { 00:22:09.172 "name": "Nvme10", 00:22:09.172 "trtype": "tcp", 00:22:09.172 "traddr": "10.0.0.2", 00:22:09.172 "adrfam": "ipv4", 00:22:09.172 "trsvcid": "4420", 00:22:09.172 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:09.172 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:09.172 "hdgst": false, 00:22:09.172 "ddgst": false 00:22:09.172 }, 00:22:09.172 "method": "bdev_nvme_attach_controller" 00:22:09.172 }' 00:22:09.172 [2024-11-19 10:49:16.591238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.431 [2024-11-19 10:49:16.633485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:10.809 Running I/O for 10 seconds... 00:22:11.071 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:11.071 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:11.071 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:11.071 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.071 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:11.071 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.071 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:11.071 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:11.071 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:11.071 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:11.071 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:22:11.071 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:22:11.071 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:11.071 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:11.071 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:11.071 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:11.071 10:49:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.071 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:11.071 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.071 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:11.071 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:11.071 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:11.330 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:11.330 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:11.330 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:11.330 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.330 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:11.330 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:11.330 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.330 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:11.330 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:11.330 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:22:11.330 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:22:11.330 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:22:11.330 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1750815 00:22:11.330 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1750815 ']' 00:22:11.330 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1750815 00:22:11.330 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:22:11.330 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:11.330 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1750815 00:22:11.604 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:11.604 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:11.604 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 1750815' 00:22:11.604 killing process with pid 1750815 00:22:11.604 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 1750815 00:22:11.604 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 1750815 00:22:11.604 [2024-11-19 10:49:18.820880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a65180 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.820956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a65180 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.820969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a65180 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.820976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a65180 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.822844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.822874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.822882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.822889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.822895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.822903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.822910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.822916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.822922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.822928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.822934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.822941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.822950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.822958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.822966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.822972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.822979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.822985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.822992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.822999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.823005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.823011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.823019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.823025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.823032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.823042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.823050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.823056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.823063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.823069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.823075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.823083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.823089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.823095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.823101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.823109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.823115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.823122] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.823129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.823136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.823143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.823149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.604 [2024-11-19 10:49:18.823156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.823164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.823170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.823176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.823182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.823188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.823195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.823202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.823208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.823214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.823223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.823229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.823235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.823241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.823248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.823254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.823260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the 
state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.823267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.823273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.823279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.823285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a630c0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 
10:49:18.824683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.605 [2024-11-19 10:49:18.824748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.606 [2024-11-19 10:49:18.824754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.606 [2024-11-19 10:49:18.824760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.606 [2024-11-19 10:49:18.824767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.606 [2024-11-19 10:49:18.824773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.606 [2024-11-19 10:49:18.824780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.606 [2024-11-19 10:49:18.824786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.606 [2024-11-19 10:49:18.824792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.606 [2024-11-19 10:49:18.824800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.606 [2024-11-19 10:49:18.824806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.606 [2024-11-19 10:49:18.824812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set 00:22:11.606 [2024-11-19 10:49:18.824818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same 
with the state(6) to be set
00:22:11.606 [2024-11-19 10:49:18.824824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set
00:22:11.606 [2024-11-19 10:49:18.824830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a635b0 is same with the state(6) to be set
00:22:11.606 [2024-11-19 10:49:18.825565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a63930 is same with the state(6) to be set
[...same message repeated for tqpair=0x1a63930 through 2024-11-19 10:49:18.826016...]
00:22:11.607 [2024-11-19 10:49:18.826907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a63e00 is same with the state(6) to be set
[...same message repeated for tqpair=0x1a63e00 through 2024-11-19 10:49:18.827342...]
00:22:11.607 [2024-11-19 10:49:18.828356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a642d0 is same with the state(6) to be set
[...same message repeated for tqpair=0x1a642d0 through 2024-11-19 10:49:18.828770...]
00:22:11.608 [2024-11-19 10:49:18.829568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a647c0 is same with the state(6) to be set
[...same message repeated for tqpair=0x1a647c0 through 2024-11-19 10:49:18.829973...]
00:22:11.609 [2024-11-19 10:49:18.836663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:11.609 [2024-11-19 10:49:18.836695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[...WRITE print_command/ABORTED - SQ DELETION pair repeated for cid:22 through cid:63 (lba:27392 through lba:32640, step 128)...]
00:22:11.611 [2024-11-19 10:49:18.837346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:11.611 [2024-11-19 10:49:18.837352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[...READ print_command/ABORTED - SQ DELETION pair repeated for cid:1 through cid:19 (lba:24704 through lba:27008)...]
00:22:11.611 [2024-11-19 10:49:18.837637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:11.611 [2024-11-19 10:49:18.837644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:11.611 [2024-11-19 10:49:18.837673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:11.611 [2024-11-19 10:49:18.838031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:11.611 [2024-11-19 10:49:18.838052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[...ASYNC EVENT REQUEST/ABORTED - SQ DELETION pair repeated for cid:1 through cid:3...]
00:22:11.611 [2024-11-19 10:49:18.838100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1d50 is same with the state(6) to be set
[...same ASYNC EVENT REQUEST abort sequence (cid:0 through cid:3) repeated...]
00:22:11.612 [2024-11-19 10:49:18.838180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaef970 is same with the state(6) to be set
[...same ASYNC EVENT REQUEST abort sequence repeated...]
00:22:11.612 [2024-11-19 10:49:18.838261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4cca0 is same with the state(6) to be set
[...same ASYNC EVENT REQUEST abort sequence repeated...]
00:22:11.612 [2024-11-19 10:49:18.838342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4bdf0 is same with the state(6) to be set
[...same ASYNC EVENT REQUEST abort sequence repeated...]
00:22:11.612 [2024-11-19 10:49:18.838416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf61950 is same with the state(6) to be set
[...same ASYNC EVENT REQUEST abort sequence repeated...]
00:22:11.612 [2024-11-19 10:49:18.838496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf15670 is same with the state(6) to be set
00:22:11.612 [2024-11-19 10:49:18.838519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:11.612 [2024-11-19 10:49:18.838527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:11.612 [2024-11-19 10:49:18.838534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:11.612 [2024-11-19 10:49:18.838541] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.612 [2024-11-19 10:49:18.838548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.612 [2024-11-19 10:49:18.838554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.612 [2024-11-19 10:49:18.838561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.612 [2024-11-19 10:49:18.838567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.612 [2024-11-19 10:49:18.838574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06610 is same with the state(6) to be set 00:22:11.612 [2024-11-19 10:49:18.838597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.612 [2024-11-19 10:49:18.838610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.612 [2024-11-19 10:49:18.838617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.612 [2024-11-19 10:49:18.838624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.612 [2024-11-19 10:49:18.838631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.612 [2024-11-19 10:49:18.838637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.612 [2024-11-19 10:49:18.838644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.612 [2024-11-19 10:49:18.838651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.612 [2024-11-19 10:49:18.838657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf15b40 is same with the state(6) to be set 00:22:11.612 [2024-11-19 10:49:18.838686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.613 [2024-11-19 10:49:18.838694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.613 [2024-11-19 10:49:18.838701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.613 [2024-11-19 10:49:18.838708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.613 [2024-11-19 10:49:18.838720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.613 [2024-11-19 10:49:18.838727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
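The two *ERROR* lines above frame what the surrounding notices mean: once the TCP connection to the target drops, spdk_nvme_qpair_process_completions() (the function named in the log itself) returns -6, i.e. -ENXIO ("No such device or address"), and every command still outstanding on the deleted submission queue is completed with ABORTED - SQ DELETION. A minimal polling sketch of where that -ENXIO surfaces: spdk_nvme_qpair_process_completions() is the real SPDK API from spdk/nvme.h, while poll_io_qpair() and the recovery comments are illustrative assumptions, not the test's code.

#include <errno.h>
#include "spdk/nvme.h"

/* Sketch only: poll one I/O qpair and classify the result the way the
 * messages above suggest. spdk_nvme_qpair_process_completions() is the
 * real SPDK call; the wrapper itself is hypothetical. */
static int32_t
poll_io_qpair(struct spdk_nvme_qpair *qpair)
{
	int32_t rc;

	/* 0 means "no cap on completions processed in this call". */
	rc = spdk_nvme_qpair_process_completions(qpair, 0);
	if (rc >= 0) {
		return rc;	/* completions reaped; 0 is a normal idle poll */
	}
	if (rc == -ENXIO) {
		/* The "CQ transport error -6 (No such device or address)"
		 * case above: the transport link is gone, so the qpair must
		 * be reconnected or destroyed before it can carry I/O. */
		return rc;
	}
	return rc;	/* any other negative errno is likewise fatal for this qpair */
}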
00:22:11.613 [2024-11-19 10:49:18.839365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:11.613 [2024-11-19 10:49:18.839383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE (sqid:1 cid:1-63, lba:24704-32640) / ABORTED - SQ DELETION notice pairs elided ...]
00:22:11.615 [2024-11-19 10:49:18.849513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:11.615 [2024-11-19 10:49:18.849603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:11.615 [2024-11-19 10:49:18.849612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE (sqid:1 cid:1-63, lba:24704-32640) / ABORTED - SQ DELETION notice pairs elided ...]
00:22:11.617 [2024-11-19 10:49:18.850784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf7660 is same with the state(6) to be set
00:22:11.617 [2024-11-19 10:49:18.852522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:11.617 [2024-11-19 10:49:18.852552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE (cid:58-63, lba:32000-32640) and READ (cid:0-17, lba:24576-26752) / ABORTED - SQ DELETION notice pairs elided ...]
00:22:11.618 [2024-11-19 10:49:18.853066] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.618 [2024-11-19 10:49:18.853075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.618 [2024-11-19 10:49:18.853086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.618 [2024-11-19 10:49:18.853095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.618 [2024-11-19 10:49:18.853106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.618 [2024-11-19 10:49:18.853115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.618 [2024-11-19 10:49:18.853126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.618 [2024-11-19 10:49:18.853136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.618 [2024-11-19 10:49:18.853147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.618 [2024-11-19 10:49:18.853156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.618 [2024-11-19 10:49:18.853167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.618 [2024-11-19 10:49:18.853176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.618 [2024-11-19 10:49:18.853186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.618 [2024-11-19 10:49:18.853195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.618 [2024-11-19 10:49:18.853206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.618 [2024-11-19 10:49:18.853215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.618 [2024-11-19 10:49:18.853226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.618 [2024-11-19 10:49:18.853235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.618 [2024-11-19 10:49:18.853246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.618 [2024-11-19 10:49:18.853255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.618 [2024-11-19 10:49:18.853266] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.618 [2024-11-19 10:49:18.853274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.618 [2024-11-19 10:49:18.853286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.618 [2024-11-19 10:49:18.853294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.618 [2024-11-19 10:49:18.853307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.618 [2024-11-19 10:49:18.853316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.618 [2024-11-19 10:49:18.853327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.618 [2024-11-19 10:49:18.853335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.618 [2024-11-19 10:49:18.853346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.618 [2024-11-19 10:49:18.853355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.618 [2024-11-19 10:49:18.853367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.618 [2024-11-19 10:49:18.853375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.618 [2024-11-19 10:49:18.853386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.618 [2024-11-19 10:49:18.853395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.618 [2024-11-19 10:49:18.853406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.618 [2024-11-19 10:49:18.853415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.618 [2024-11-19 10:49:18.853426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.618 [2024-11-19 10:49:18.853435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.618 [2024-11-19 10:49:18.853445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.618 [2024-11-19 10:49:18.853454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.618 [2024-11-19 10:49:18.853465] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.618 [2024-11-19 10:49:18.853474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.618 [2024-11-19 10:49:18.853485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.618 [2024-11-19 10:49:18.853493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.618 [2024-11-19 10:49:18.853504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.618 [2024-11-19 10:49:18.853513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.618 [2024-11-19 10:49:18.853526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.618 [2024-11-19 10:49:18.853535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.618 [2024-11-19 10:49:18.853547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.618 [2024-11-19 10:49:18.853562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.618 [2024-11-19 10:49:18.853574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.618 [2024-11-19 10:49:18.853582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.618 [2024-11-19 10:49:18.853594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.618 [2024-11-19 10:49:18.853603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.618 [2024-11-19 10:49:18.853614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.618 [2024-11-19 10:49:18.853623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.618 [2024-11-19 10:49:18.853634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.618 [2024-11-19 10:49:18.853643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.618 [2024-11-19 10:49:18.853655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.618 [2024-11-19 10:49:18.853664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.618 [2024-11-19 10:49:18.853676] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.618 [2024-11-19 10:49:18.853684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.618 [2024-11-19 10:49:18.853695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.618 [2024-11-19 10:49:18.853705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.618 [2024-11-19 10:49:18.853716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.618 [2024-11-19 10:49:18.853725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.618 [2024-11-19 10:49:18.853736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.618 [2024-11-19 10:49:18.853745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.618 [2024-11-19 10:49:18.853756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.618 [2024-11-19 10:49:18.853765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.618 [2024-11-19 10:49:18.853776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.619 [2024-11-19 10:49:18.853785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.619 [2024-11-19 10:49:18.853795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.619 [2024-11-19 10:49:18.853804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.619 [2024-11-19 10:49:18.853819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.619 [2024-11-19 10:49:18.853828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.619 [2024-11-19 10:49:18.853839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.619 [2024-11-19 10:49:18.853848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.619 [2024-11-19 10:49:18.853978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf1d50 (9): Bad file descriptor 00:22:11.619 [2024-11-19 10:49:18.854002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaef970 (9): Bad file descriptor 00:22:11.619 [2024-11-19 10:49:18.854021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to 
flush tqpair=0xf4cca0 (9): Bad file descriptor 00:22:11.619 [2024-11-19 10:49:18.854037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4bdf0 (9): Bad file descriptor 00:22:11.619 [2024-11-19 10:49:18.854057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf61950 (9): Bad file descriptor 00:22:11.619 [2024-11-19 10:49:18.854071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf15670 (9): Bad file descriptor 00:22:11.619 [2024-11-19 10:49:18.854092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa06610 (9): Bad file descriptor 00:22:11.619 [2024-11-19 10:49:18.854109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf15b40 (9): Bad file descriptor 00:22:11.619 [2024-11-19 10:49:18.854127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf21b0 (9): Bad file descriptor 00:22:11.619 [2024-11-19 10:49:18.854143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf1c6a0 (9): Bad file descriptor 00:22:11.619 [2024-11-19 10:49:18.858183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:22:11.619 [2024-11-19 10:49:18.858223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:11.619 [2024-11-19 10:49:18.858918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:22:11.619 [2024-11-19 10:49:18.858956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:22:11.619 [2024-11-19 10:49:18.859269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:11.619 [2024-11-19 10:49:18.859289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf4bdf0 with addr=10.0.0.2, port=4420 00:22:11.619 [2024-11-19 10:49:18.859300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4bdf0 is same with the state(6) to be set 00:22:11.619 [2024-11-19 10:49:18.859532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:11.619 [2024-11-19 10:49:18.859547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf21b0 with addr=10.0.0.2, port=4420 00:22:11.619 [2024-11-19 10:49:18.859557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf21b0 is same with the state(6) to be set 00:22:11.619 [2024-11-19 10:49:18.860198] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:11.619 [2024-11-19 10:49:18.860247] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:11.619 [2024-11-19 10:49:18.860292] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:11.619 [2024-11-19 10:49:18.860336] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:11.619 [2024-11-19 10:49:18.860379] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:11.619 [2024-11-19 10:49:18.860423] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:11.619 [2024-11-19 10:49:18.860899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:11.619 [2024-11-19 10:49:18.860914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: 
sock connection error of tqpair=0xaf1d50 with addr=10.0.0.2, port=4420 00:22:11.619 [2024-11-19 10:49:18.860923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1d50 is same with the state(6) to be set 00:22:11.619 [2024-11-19 10:49:18.861208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:11.619 [2024-11-19 10:49:18.861220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf61950 with addr=10.0.0.2, port=4420 00:22:11.619 [2024-11-19 10:49:18.861227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf61950 is same with the state(6) to be set 00:22:11.619 [2024-11-19 10:49:18.861238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4bdf0 (9): Bad file descriptor 00:22:11.619 [2024-11-19 10:49:18.861249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf21b0 (9): Bad file descriptor 00:22:11.619 [2024-11-19 10:49:18.861342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf1d50 (9): Bad file descriptor 00:22:11.619 [2024-11-19 10:49:18.861355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf61950 (9): Bad file descriptor 00:22:11.619 [2024-11-19 10:49:18.861363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:22:11.619 [2024-11-19 10:49:18.861370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:22:11.619 [2024-11-19 10:49:18.861379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:22:11.619 [2024-11-19 10:49:18.861388] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:22:11.619 [2024-11-19 10:49:18.861395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:11.619 [2024-11-19 10:49:18.861401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:11.619 [2024-11-19 10:49:18.861408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:11.619 [2024-11-19 10:49:18.861415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:11.619 [2024-11-19 10:49:18.861461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:22:11.619 [2024-11-19 10:49:18.861468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:22:11.619 [2024-11-19 10:49:18.861475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:22:11.619 [2024-11-19 10:49:18.861481] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:22:11.619 [2024-11-19 10:49:18.861488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:22:11.619 [2024-11-19 10:49:18.861493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:22:11.619 [2024-11-19 10:49:18.861500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:22:11.619 [2024-11-19 10:49:18.861506] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:22:11.619-00:22:11.621 [2024-11-19 10:49:18.864089-18.865053] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 (64 commands, lba step 128), each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:11.621 [2024-11-19 10:49:18.865062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedf350 is same with the state(6) to be set
00:22:11.621-00:22:11.623 [2024-11-19 10:49:18.866122-18.866772] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-43 nsid:1 lba:24576-30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 (44 commands, lba step 128), each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:11.623 [2024-11-19 10:49:18.866781] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.623 [2024-11-19 10:49:18.866787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.623 [2024-11-19 10:49:18.866796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.623 [2024-11-19 10:49:18.866803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.623 [2024-11-19 10:49:18.866811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.623 [2024-11-19 10:49:18.866817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.623 [2024-11-19 10:49:18.866825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.623 [2024-11-19 10:49:18.866831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.623 [2024-11-19 10:49:18.866839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.623 [2024-11-19 10:49:18.866846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.623 [2024-11-19 10:49:18.866854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.623 [2024-11-19 10:49:18.866861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.623 [2024-11-19 10:49:18.866871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.623 [2024-11-19 10:49:18.866877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.623 [2024-11-19 10:49:18.866885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.623 [2024-11-19 10:49:18.866892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.623 [2024-11-19 10:49:18.866900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.623 [2024-11-19 10:49:18.866906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.623 [2024-11-19 10:49:18.866914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.623 [2024-11-19 10:49:18.866921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.623 [2024-11-19 10:49:18.866929] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.623 [2024-11-19 10:49:18.866935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.623 [2024-11-19 10:49:18.866944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.623 [2024-11-19 10:49:18.866954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.623 [2024-11-19 10:49:18.866962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.623 [2024-11-19 10:49:18.866969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.623 [2024-11-19 10:49:18.866977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.623 [2024-11-19 10:49:18.866983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.623 [2024-11-19 10:49:18.866991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.623 [2024-11-19 10:49:18.866998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.623 [2024-11-19 10:49:18.867006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.623 [2024-11-19 10:49:18.867013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.623 [2024-11-19 10:49:18.867020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.623 [2024-11-19 10:49:18.867027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.623 [2024-11-19 10:49:18.867035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.623 [2024-11-19 10:49:18.867042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.623 [2024-11-19 10:49:18.867050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.623 [2024-11-19 10:49:18.867059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.623 [2024-11-19 10:49:18.867066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.623 [2024-11-19 10:49:18.867073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.623 [2024-11-19 10:49:18.867080] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeec700 is same with the state(6) to be set 00:22:11.623 [2024-11-19 10:49:18.868088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.623 [2024-11-19 10:49:18.868100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.623 [2024-11-19 10:49:18.868111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.623 [2024-11-19 10:49:18.868118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.623 [2024-11-19 10:49:18.868125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.623 [2024-11-19 10:49:18.868132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.623 [2024-11-19 10:49:18.868141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.623 [2024-11-19 10:49:18.868147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.623 [2024-11-19 10:49:18.868155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.623 [2024-11-19 10:49:18.868162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.623 [2024-11-19 10:49:18.868170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.623 [2024-11-19 10:49:18.868176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.623 [2024-11-19 10:49:18.868184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.623 [2024-11-19 10:49:18.868191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.623 [2024-11-19 10:49:18.868199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.623 [2024-11-19 10:49:18.868206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.624 [2024-11-19 10:49:18.868214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.624 [2024-11-19 10:49:18.868221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.624 [2024-11-19 10:49:18.868229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.624 [2024-11-19 10:49:18.868235] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.624 [2024-11-19 10:49:18.868243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.624 [2024-11-19 10:49:18.868253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.624 [2024-11-19 10:49:18.868261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.624 [2024-11-19 10:49:18.868268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.624 [2024-11-19 10:49:18.868276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.624 [2024-11-19 10:49:18.868282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.624 [2024-11-19 10:49:18.868291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.624 [2024-11-19 10:49:18.868297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.624 [2024-11-19 10:49:18.868306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.624 [2024-11-19 10:49:18.868312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.624 [2024-11-19 10:49:18.868320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.624 [2024-11-19 10:49:18.868327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.624 [2024-11-19 10:49:18.868335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.624 [2024-11-19 10:49:18.868341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.624 [2024-11-19 10:49:18.868350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.624 [2024-11-19 10:49:18.868356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.624 [2024-11-19 10:49:18.868364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.624 [2024-11-19 10:49:18.868370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.624 [2024-11-19 10:49:18.868379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.624 [2024-11-19 10:49:18.868385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.624 [2024-11-19 10:49:18.868393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.624 [2024-11-19 10:49:18.868400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.624 [2024-11-19 10:49:18.868408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.624 [2024-11-19 10:49:18.868415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.624 [2024-11-19 10:49:18.868423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.624 [2024-11-19 10:49:18.868430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.624 [2024-11-19 10:49:18.868442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.624 [2024-11-19 10:49:18.868449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.624 [2024-11-19 10:49:18.868457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.624 [2024-11-19 10:49:18.868463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.624 [2024-11-19 10:49:18.868471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.624 [2024-11-19 10:49:18.868478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.624 [2024-11-19 10:49:18.868487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.624 [2024-11-19 10:49:18.868493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.624 [2024-11-19 10:49:18.868502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.624 [2024-11-19 10:49:18.868508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.624 [2024-11-19 10:49:18.868517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.624 [2024-11-19 10:49:18.868524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.624 [2024-11-19 10:49:18.868532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.624 [2024-11-19 10:49:18.868539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.624 [2024-11-19 10:49:18.868547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.624 [2024-11-19 10:49:18.868554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.624 [2024-11-19 10:49:18.868562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.624 [2024-11-19 10:49:18.868568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.624 [2024-11-19 10:49:18.868576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.624 [2024-11-19 10:49:18.868583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.624 [2024-11-19 10:49:18.868591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.624 [2024-11-19 10:49:18.868597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.624 [2024-11-19 10:49:18.868605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.624 [2024-11-19 10:49:18.868612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.624 [2024-11-19 10:49:18.868620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.624 [2024-11-19 10:49:18.868628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.624 [2024-11-19 10:49:18.868637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.624 [2024-11-19 10:49:18.868643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.624 [2024-11-19 10:49:18.868652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.624 [2024-11-19 10:49:18.868658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.624 [2024-11-19 10:49:18.868666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.624 [2024-11-19 10:49:18.868673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.624 [2024-11-19 10:49:18.868681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.624 [2024-11-19 10:49:18.868687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:11.624 [2024-11-19 10:49:18.868695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.624 [2024-11-19 10:49:18.868701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.624 [2024-11-19 10:49:18.868709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.624 [2024-11-19 10:49:18.868717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.624 [2024-11-19 10:49:18.868724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.625 [2024-11-19 10:49:18.868731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.625 [2024-11-19 10:49:18.868739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.625 [2024-11-19 10:49:18.868745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.625 [2024-11-19 10:49:18.868754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.625 [2024-11-19 10:49:18.868760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.625 [2024-11-19 10:49:18.868768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.625 [2024-11-19 10:49:18.868775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.625 [2024-11-19 10:49:18.868783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.625 [2024-11-19 10:49:18.868789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.625 [2024-11-19 10:49:18.868797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.625 [2024-11-19 10:49:18.868804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.625 [2024-11-19 10:49:18.868814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.625 [2024-11-19 10:49:18.868820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.625 [2024-11-19 10:49:18.868828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.625 [2024-11-19 10:49:18.868835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:11.625 [2024-11-19 10:49:18.868843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.625 [2024-11-19 10:49:18.868849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.625 [2024-11-19 10:49:18.868857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.625 [2024-11-19 10:49:18.868864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.625 [2024-11-19 10:49:18.868872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.625 [2024-11-19 10:49:18.868878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.625 [2024-11-19 10:49:18.868887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.625 [2024-11-19 10:49:18.868893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.625 [2024-11-19 10:49:18.868901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.625 [2024-11-19 10:49:18.868908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.625 [2024-11-19 10:49:18.868915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.625 [2024-11-19 10:49:18.868922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.625 [2024-11-19 10:49:18.868930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.625 [2024-11-19 10:49:18.868936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.625 [2024-11-19 10:49:18.868944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.625 [2024-11-19 10:49:18.868957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.625 [2024-11-19 10:49:18.868966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.625 [2024-11-19 10:49:18.868972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.625 [2024-11-19 10:49:18.868981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.625 [2024-11-19 10:49:18.868987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.625 [2024-11-19 
10:49:18.868995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.625 [2024-11-19 10:49:18.869004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.625 [2024-11-19 10:49:18.869012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.625 [2024-11-19 10:49:18.869019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.625 [2024-11-19 10:49:18.869027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.625 [2024-11-19 10:49:18.869034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.625 [2024-11-19 10:49:18.869042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.625 [2024-11-19 10:49:18.869049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.625 [2024-11-19 10:49:18.869056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef9ac0 is same with the state(6) to be set 00:22:11.625 [2024-11-19 10:49:18.870065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.625 [2024-11-19 10:49:18.870077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.625 [2024-11-19 10:49:18.870088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.625 [2024-11-19 10:49:18.870094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.625 [2024-11-19 10:49:18.870104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.625 [2024-11-19 10:49:18.870110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.625 [2024-11-19 10:49:18.870118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.625 [2024-11-19 10:49:18.870125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.625 [2024-11-19 10:49:18.870134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.625 [2024-11-19 10:49:18.870140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.625 [2024-11-19 10:49:18.870148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.625 [2024-11-19 10:49:18.870155] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.625 [2024-11-19 10:49:18.870163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.625 [2024-11-19 10:49:18.870170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.625 [2024-11-19 10:49:18.870178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.625 [2024-11-19 10:49:18.870184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.625 [2024-11-19 10:49:18.870192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.625 [2024-11-19 10:49:18.870202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.626 [2024-11-19 10:49:18.870210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.626 [2024-11-19 10:49:18.870216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.626 [2024-11-19 10:49:18.870224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.626 [2024-11-19 10:49:18.870231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.626 [2024-11-19 10:49:18.870238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.626 [2024-11-19 10:49:18.870245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.626 [2024-11-19 10:49:18.870254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.626 [2024-11-19 10:49:18.870260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.626 [2024-11-19 10:49:18.870275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.626 [2024-11-19 10:49:18.870281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.626 [2024-11-19 10:49:18.870289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.626 [2024-11-19 10:49:18.870296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.626 [2024-11-19 10:49:18.870304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.626 [2024-11-19 10:49:18.870310] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.626 [2024-11-19 10:49:18.870318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.626 [2024-11-19 10:49:18.870325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.626 [2024-11-19 10:49:18.870333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.626 [2024-11-19 10:49:18.870340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.626 [2024-11-19 10:49:18.870347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.626 [2024-11-19 10:49:18.870354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.626 [2024-11-19 10:49:18.870362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.626 [2024-11-19 10:49:18.870369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.626 [2024-11-19 10:49:18.870377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.626 [2024-11-19 10:49:18.870384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.626 [2024-11-19 10:49:18.870394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.626 [2024-11-19 10:49:18.870400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.626 [2024-11-19 10:49:18.870408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.626 [2024-11-19 10:49:18.870415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.626 [2024-11-19 10:49:18.870423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.626 [2024-11-19 10:49:18.870429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.626 [2024-11-19 10:49:18.870437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.626 [2024-11-19 10:49:18.870444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.626 [2024-11-19 10:49:18.870452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.626 [2024-11-19 10:49:18.870458] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.626 [2024-11-19 10:49:18.870466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.626 [2024-11-19 10:49:18.870473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.626 [2024-11-19 10:49:18.870481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.626 [2024-11-19 10:49:18.870488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.626 [2024-11-19 10:49:18.870496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.626 [2024-11-19 10:49:18.870502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.626 [2024-11-19 10:49:18.870511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.626 [2024-11-19 10:49:18.870517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.626 [2024-11-19 10:49:18.870525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.626 [2024-11-19 10:49:18.870532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.626 [2024-11-19 10:49:18.870539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.626 [2024-11-19 10:49:18.870546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.626 [2024-11-19 10:49:18.870554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.626 [2024-11-19 10:49:18.870561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.626 [2024-11-19 10:49:18.870569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.626 [2024-11-19 10:49:18.870577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.626 [2024-11-19 10:49:18.870585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.626 [2024-11-19 10:49:18.870592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.626 [2024-11-19 10:49:18.870599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.626 [2024-11-19 10:49:18.870606] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.626 [2024-11-19 10:49:18.870614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.626 [2024-11-19 10:49:18.870620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.626 [2024-11-19 10:49:18.870629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.626 [2024-11-19 10:49:18.870636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.626 [2024-11-19 10:49:18.870644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.626 [2024-11-19 10:49:18.870650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.626 [2024-11-19 10:49:18.870658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.626 [2024-11-19 10:49:18.870664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.626 [2024-11-19 10:49:18.870673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.626 [2024-11-19 10:49:18.870679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.626 [2024-11-19 10:49:18.870687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.626 [2024-11-19 10:49:18.870693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.626 [2024-11-19 10:49:18.870701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.626 [2024-11-19 10:49:18.870708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.627 [2024-11-19 10:49:18.870716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.627 [2024-11-19 10:49:18.870722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.627 [2024-11-19 10:49:18.870731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.627 [2024-11-19 10:49:18.870737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.627 [2024-11-19 10:49:18.870745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.627 [2024-11-19 10:49:18.870752] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.627 [2024-11-19 10:49:18.870761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.627 [2024-11-19 10:49:18.870768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.627 [2024-11-19 10:49:18.870775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.627 [2024-11-19 10:49:18.870782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.627 [2024-11-19 10:49:18.870790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.627 [2024-11-19 10:49:18.870797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.627 [2024-11-19 10:49:18.870805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.627 [2024-11-19 10:49:18.870811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.627 [2024-11-19 10:49:18.870819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.627 [2024-11-19 10:49:18.870826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.627 [2024-11-19 10:49:18.870834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.627 [2024-11-19 10:49:18.870840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.627 [2024-11-19 10:49:18.870848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.627 [2024-11-19 10:49:18.870857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.627 [2024-11-19 10:49:18.870865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.627 [2024-11-19 10:49:18.870872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.627 [2024-11-19 10:49:18.870880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.627 [2024-11-19 10:49:18.870887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.627 [2024-11-19 10:49:18.870894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.627 [2024-11-19 10:49:18.870901] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.627 [2024-11-19 10:49:18.870909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.627 [2024-11-19 10:49:18.870916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.627 [2024-11-19 10:49:18.870924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.627 [2024-11-19 10:49:18.870930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.627 [2024-11-19 10:49:18.870939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.627 [2024-11-19 10:49:18.870950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.627 [2024-11-19 10:49:18.870959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.627 [2024-11-19 10:49:18.870966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.627 [2024-11-19 10:49:18.870974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.627 [2024-11-19 10:49:18.870981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.627 [2024-11-19 10:49:18.870989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.627 [2024-11-19 10:49:18.870995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.627 [2024-11-19 10:49:18.871004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.627 [2024-11-19 10:49:18.871013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.627 [2024-11-19 10:49:18.871022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.627 [2024-11-19 10:49:18.871028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.627 [2024-11-19 10:49:18.871036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a5770 is same with the state(6) to be set 00:22:11.627 [2024-11-19 10:49:18.872056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.627 [2024-11-19 10:49:18.872068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.627 [2024-11-19 10:49:18.872078] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
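The abort bursts in this excerpt are mechanical: as each TCP qpair is torn down, every command still outstanding on sqid:1 is printed by nvme_io_qpair_print_command() and completed by spdk_nvme_print_completion() with ABORTED - SQ DELETION. The "(00/08)" pair is status code type 0x0 (Generic Command Status) / status code 0x08 (Command Aborted due to SQ Deletion) from the NVMe specification, and dnr:0 means the Do Not Retry bit is clear, so the host may resubmit. The following is a minimal triage sketch, not part of the SPDK test suite: the regexes are assumptions matching the line format shown above, and it assumes one message per physical line, as in the raw console stream. It collapses such a stream into per-opcode cid/lba ranges and per-status counts.

#!/usr/bin/env python3
# Triage sketch (assumed line format, see note above): aggregate the repeated
# "ABORTED - SQ DELETION" notices into per-opcode cid/lba ranges.
import re
import sys
from collections import defaultdict

CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
    r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)"
)
# "(00/08)" is status code type 0x0 (Generic Command Status) /
# status code 0x08 (Command Aborted due to SQ Deletion) per the NVMe spec.
CPL_RE = re.compile(r"spdk_nvme_print_completion: \*NOTICE\*: (.+?) \((\w+)/(\w+)\)")

def summarize(lines):
    ops = defaultdict(lambda: {"cids": [], "lbas": []})
    statuses = defaultdict(int)
    for line in lines:
        m = CMD_RE.search(line)
        if m:
            op, _sqid, cid, _nsid, lba, _length = m.groups()
            ops[op]["cids"].append(int(cid))
            ops[op]["lbas"].append(int(lba))
        c = CPL_RE.search(line)
        if c:
            name, sct, sc = c.groups()
            statuses[(name.strip(), int(sct, 16), int(sc, 16))] += 1
    for op, v in sorted(ops.items()):
        print(f"{op}: {len(v['cids'])} cmds, cid {min(v['cids'])}-{max(v['cids'])}, "
              f"lba {min(v['lbas'])}-{max(v['lbas'])}")
    for (name, sct, sc), n in sorted(statuses.items()):
        print(f"status '{name}' (sct={sct:#x} sc={sc:#x}): {n} completions")

if __name__ == "__main__":
    summarize(sys.stdin)

Fed this console log on stdin (the invocation and file name are hypothetical, e.g. "python3 summarize_aborts.py < console.log"), it prints one cid/lba range line per opcode and a count of completions per distinct status.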
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.627 [2024-11-19 10:49:18.872085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.627 [2024-11-19 10:49:18.872094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.627 [2024-11-19 10:49:18.872101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.627 [2024-11-19 10:49:18.872109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.627 [2024-11-19 10:49:18.872117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.627 [2024-11-19 10:49:18.872125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.627 [2024-11-19 10:49:18.872131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.627 [2024-11-19 10:49:18.872140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.627 [2024-11-19 10:49:18.872147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.627 [2024-11-19 10:49:18.872155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.627 [2024-11-19 10:49:18.872164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.627 [2024-11-19 10:49:18.872172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.627 [2024-11-19 10:49:18.872179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.627 [2024-11-19 10:49:18.872188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.627 [2024-11-19 10:49:18.872194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.627 [2024-11-19 10:49:18.872202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.627 [2024-11-19 10:49:18.872209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.627 [2024-11-19 10:49:18.872217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.627 [2024-11-19 10:49:18.872223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.627 [2024-11-19 10:49:18.872232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.627 [2024-11-19 10:49:18.872238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.627 [2024-11-19 10:49:18.872247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.627 [2024-11-19 10:49:18.872253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.627 [2024-11-19 10:49:18.872262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.628 [2024-11-19 10:49:18.872268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.628 [2024-11-19 10:49:18.872277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.628 [2024-11-19 10:49:18.872284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.628 [2024-11-19 10:49:18.872292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.628 [2024-11-19 10:49:18.872299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.628 [2024-11-19 10:49:18.872306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.628 [2024-11-19 10:49:18.872313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.628 [2024-11-19 10:49:18.872321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.628 [2024-11-19 10:49:18.872328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.628 [2024-11-19 10:49:18.872336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.628 [2024-11-19 10:49:18.872342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.628 [2024-11-19 10:49:18.872353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.628 [2024-11-19 10:49:18.872360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.628 [2024-11-19 10:49:18.872368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.628 [2024-11-19 10:49:18.872375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.628 [2024-11-19 10:49:18.872383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.628 [2024-11-19 10:49:18.872390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.628 [2024-11-19 10:49:18.872397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.628 [2024-11-19 10:49:18.872404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.628 [2024-11-19 10:49:18.872412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.628 [2024-11-19 10:49:18.872419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.628 [2024-11-19 10:49:18.872427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.628 [2024-11-19 10:49:18.872433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.628 [2024-11-19 10:49:18.872441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.628 [2024-11-19 10:49:18.872448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.628 [2024-11-19 10:49:18.872457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.628 [2024-11-19 10:49:18.872464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.628 [2024-11-19 10:49:18.872472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.628 [2024-11-19 10:49:18.872479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.628 [2024-11-19 10:49:18.872487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.628 [2024-11-19 10:49:18.872493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.628 [2024-11-19 10:49:18.872501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.628 [2024-11-19 10:49:18.872508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.628 [2024-11-19 10:49:18.872516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.628 [2024-11-19 10:49:18.872522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.628 [2024-11-19 10:49:18.872531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:11.628 [2024-11-19 10:49:18.872539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.628 [2024-11-19 10:49:18.872547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.628 [2024-11-19 10:49:18.872554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.628 [2024-11-19 10:49:18.872562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.628 [2024-11-19 10:49:18.872569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.628 [2024-11-19 10:49:18.872577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.628 [2024-11-19 10:49:18.872583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.628 [2024-11-19 10:49:18.872591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.628 [2024-11-19 10:49:18.872598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.628 [2024-11-19 10:49:18.872605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.628 [2024-11-19 10:49:18.872612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.628 [2024-11-19 10:49:18.872620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.628 [2024-11-19 10:49:18.872627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.628 [2024-11-19 10:49:18.872635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.628 [2024-11-19 10:49:18.872641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.628 [2024-11-19 10:49:18.872649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.628 [2024-11-19 10:49:18.872656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.628 [2024-11-19 10:49:18.872664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.628 [2024-11-19 10:49:18.872670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.628 [2024-11-19 10:49:18.872678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:11.628 [2024-11-19 10:49:18.872685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.628 [2024-11-19 10:49:18.872693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.628 [2024-11-19 10:49:18.872699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.628 [2024-11-19 10:49:18.872713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.628 [2024-11-19 10:49:18.872720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.628 [2024-11-19 10:49:18.872728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.628 [2024-11-19 10:49:18.872736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.628 [2024-11-19 10:49:18.872745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.628 [2024-11-19 10:49:18.872751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.628 [2024-11-19 10:49:18.872760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.628 [2024-11-19 10:49:18.872766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.628 [2024-11-19 10:49:18.872774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.628 [2024-11-19 10:49:18.872781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.628 [2024-11-19 10:49:18.872789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.629 [2024-11-19 10:49:18.872795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.629 [2024-11-19 10:49:18.872803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.629 [2024-11-19 10:49:18.872810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.629 [2024-11-19 10:49:18.872818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.629 [2024-11-19 10:49:18.872824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.629 [2024-11-19 10:49:18.872832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.629 [2024-11-19 
10:49:18.872839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.629 [2024-11-19 10:49:18.872847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.629 [2024-11-19 10:49:18.872853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.629 [2024-11-19 10:49:18.872862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.629 [2024-11-19 10:49:18.872868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.629 [2024-11-19 10:49:18.872876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.629 [2024-11-19 10:49:18.872883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.629 [2024-11-19 10:49:18.872891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.629 [2024-11-19 10:49:18.872898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.629 [2024-11-19 10:49:18.872905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.629 [2024-11-19 10:49:18.872912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.629 [2024-11-19 10:49:18.872921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.629 [2024-11-19 10:49:18.872928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.629 [2024-11-19 10:49:18.872936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.629 [2024-11-19 10:49:18.872943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.629 [2024-11-19 10:49:18.872959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.629 [2024-11-19 10:49:18.872966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.629 [2024-11-19 10:49:18.872974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.629 [2024-11-19 10:49:18.872981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.629 [2024-11-19 10:49:18.872989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.629 [2024-11-19 10:49:18.872995] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.629 [2024-11-19 10:49:18.873003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.629 [2024-11-19 10:49:18.873010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.629 [2024-11-19 10:49:18.873018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.629 [2024-11-19 10:49:18.873024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.629 [2024-11-19 10:49:18.873032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf33d0 is same with the state(6) to be set 00:22:11.629 [2024-11-19 10:49:18.874045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.629 [2024-11-19 10:49:18.874058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.629 [2024-11-19 10:49:18.874067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.629 [2024-11-19 10:49:18.874074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.629 [2024-11-19 10:49:18.874082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.629 [2024-11-19 10:49:18.874089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.629 [2024-11-19 10:49:18.874097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.629 [2024-11-19 10:49:18.874104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.629 [2024-11-19 10:49:18.874112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.629 [2024-11-19 10:49:18.874119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.629 [2024-11-19 10:49:18.874129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.629 [2024-11-19 10:49:18.874136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.629 [2024-11-19 10:49:18.874144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.629 [2024-11-19 10:49:18.874151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.629 [2024-11-19 10:49:18.874159] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.629 [2024-11-19 10:49:18.874165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.629 [2024-11-19 10:49:18.874174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.629 [2024-11-19 10:49:18.874180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.629 [2024-11-19 10:49:18.874189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.629 [2024-11-19 10:49:18.874196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.629 [2024-11-19 10:49:18.874204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.629 [2024-11-19 10:49:18.874212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.629 [2024-11-19 10:49:18.874220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.629 [2024-11-19 10:49:18.874226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.629 [2024-11-19 10:49:18.874234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.629 [2024-11-19 10:49:18.874241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.629 [2024-11-19 10:49:18.874249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.629 [2024-11-19 10:49:18.874256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.629 [2024-11-19 10:49:18.874264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.629 [2024-11-19 10:49:18.874270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.629 [2024-11-19 10:49:18.874279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.629 [2024-11-19 10:49:18.874285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.629 [2024-11-19 10:49:18.874293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.629 [2024-11-19 10:49:18.874300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.629 [2024-11-19 10:49:18.874308] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.629 [2024-11-19 10:49:18.874318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.629 [2024-11-19 10:49:18.874327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.629 [2024-11-19 10:49:18.874334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.629 [2024-11-19 10:49:18.874341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.629 [2024-11-19 10:49:18.874348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.629 [2024-11-19 10:49:18.874356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.630 [2024-11-19 10:49:18.874363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.630 [2024-11-19 10:49:18.874371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.630 [2024-11-19 10:49:18.874377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.630 [2024-11-19 10:49:18.874386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.630 [2024-11-19 10:49:18.874392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.630 [2024-11-19 10:49:18.874400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.630 [2024-11-19 10:49:18.874407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.630 [2024-11-19 10:49:18.874415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.630 [2024-11-19 10:49:18.874422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.630 [2024-11-19 10:49:18.874430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.630 [2024-11-19 10:49:18.874436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.630 [2024-11-19 10:49:18.874444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.630 [2024-11-19 10:49:18.874451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.630 [2024-11-19 10:49:18.874459] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.630 [2024-11-19 10:49:18.874465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.630 [2024-11-19 10:49:18.874473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.630 [2024-11-19 10:49:18.874480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.630 [2024-11-19 10:49:18.874488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.630 [2024-11-19 10:49:18.874494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.630 [2024-11-19 10:49:18.874504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.630 [2024-11-19 10:49:18.874510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.630 [2024-11-19 10:49:18.874518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.630 [2024-11-19 10:49:18.874525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.630 [2024-11-19 10:49:18.874533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.630 [2024-11-19 10:49:18.874540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.630 [2024-11-19 10:49:18.874547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.630 [2024-11-19 10:49:18.874554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.630 [2024-11-19 10:49:18.874562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.630 [2024-11-19 10:49:18.874568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.630 [2024-11-19 10:49:18.874576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.630 [2024-11-19 10:49:18.874583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.630 [2024-11-19 10:49:18.874591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.630 [2024-11-19 10:49:18.874597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.630 [2024-11-19 10:49:18.874605] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.630 [2024-11-19 10:49:18.874611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.630 [2024-11-19 10:49:18.874619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.630 [2024-11-19 10:49:18.874626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.630 [2024-11-19 10:49:18.874634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.630 [2024-11-19 10:49:18.874640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.630 [2024-11-19 10:49:18.874648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.630 [2024-11-19 10:49:18.874655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.630 [2024-11-19 10:49:18.874663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.630 [2024-11-19 10:49:18.874669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.630 [2024-11-19 10:49:18.874677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.630 [2024-11-19 10:49:18.874686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.630 [2024-11-19 10:49:18.874694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.630 [2024-11-19 10:49:18.874701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.630 [2024-11-19 10:49:18.874709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.630 [2024-11-19 10:49:18.874715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.630 [2024-11-19 10:49:18.874723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.630 [2024-11-19 10:49:18.874730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.630 [2024-11-19 10:49:18.874738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.630 [2024-11-19 10:49:18.874744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.630 [2024-11-19 10:49:18.874752] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.630 [2024-11-19 10:49:18.874759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.630 [2024-11-19 10:49:18.874767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.630 [2024-11-19 10:49:18.874774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.630 [2024-11-19 10:49:18.874782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.630 [2024-11-19 10:49:18.874789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.630 [2024-11-19 10:49:18.874797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.630 [2024-11-19 10:49:18.874803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.631 [2024-11-19 10:49:18.874812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.631 [2024-11-19 10:49:18.874819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.631 [2024-11-19 10:49:18.874827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.631 [2024-11-19 10:49:18.874833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.631 [2024-11-19 10:49:18.874842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.631 [2024-11-19 10:49:18.874848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.631 [2024-11-19 10:49:18.874856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.631 [2024-11-19 10:49:18.874863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.631 [2024-11-19 10:49:18.874873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.631 [2024-11-19 10:49:18.874879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.631 [2024-11-19 10:49:18.874889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.631 [2024-11-19 10:49:18.874896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.631 [2024-11-19 10:49:18.874904] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.631 [2024-11-19 10:49:18.874911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.631 [2024-11-19 10:49:18.874919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.631 [2024-11-19 10:49:18.874926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.631 [2024-11-19 10:49:18.874934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.631 [2024-11-19 10:49:18.874941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.631 [2024-11-19 10:49:18.874952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.631 [2024-11-19 10:49:18.874959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.631 [2024-11-19 10:49:18.874968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.631 [2024-11-19 10:49:18.874974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.631 [2024-11-19 10:49:18.874983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.631 [2024-11-19 10:49:18.874989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.631 [2024-11-19 10:49:18.874997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.631 [2024-11-19 10:49:18.875004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.631 [2024-11-19 10:49:18.875011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e41000 is same with the state(6) to be set 00:22:11.631 [2024-11-19 10:49:18.875992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:22:11.631 [2024-11-19 10:49:18.876011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:22:11.631 [2024-11-19 10:49:18.876020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:22:11.631 [2024-11-19 10:49:18.876029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:22:11.631 [2024-11-19 10:49:18.876101] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:22:11.631 [2024-11-19 10:49:18.876115] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 
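Every outstanding READ on qid:1 above completes with ABORTED - SQ DELETION because the I/O submission queues are torn down while the controllers are being reset. The (00/08) pair printed by spdk_nvme_print_completion is the NVMe status code type / status code in hex: type 0x00 is generic command status, and code 0x08 there is Command Aborted due to SQ Deletion. A minimal, hypothetical decoder for that pair (a standalone sketch, not a helper from the SPDK tree):

  # decode_status: hypothetical helper mapping the "(sct/sc)" pair to text.
  decode_status() {
      local sct=$((16#$1)) sc=$((16#$2))   # both fields are printed in hex
      if (( sct == 0x00 && sc == 0x08 )); then
          echo "generic command status: Command Aborted due to SQ Deletion"
      else
          echo "sct=0x$1 sc=0x$2 (see the NVMe base spec status code tables)"
      fi
  }
  decode_status 00 08   # -> generic command status: Command Aborted due to SQ Deletion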
00:22:11.631 [2024-11-19 10:49:18.876176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:22:11.631 task offset: 27264 on job bdev=Nvme9n1 fails
00:22:11.631
00:22:11.631 Latency(us)
00:22:11.631 [2024-11-19T09:49:19.080Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:11.631 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:11.631 Job: Nvme1n1 ended in about 0.92 seconds with error
00:22:11.631 Verification LBA range: start 0x0 length 0x400
00:22:11.631 Nvme1n1  : 0.92  207.88  12.99  69.29  0.00  228525.19  19489.84  220656.86
00:22:11.631 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:11.631 Job: Nvme2n1 ended in about 0.92 seconds with error
00:22:11.631 Verification LBA range: start 0x0 length 0x400
00:22:11.631 Nvme2n1  : 0.92  207.61  12.98  69.20  0.00  224861.94  19603.81  248011.02
00:22:11.631 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:11.631 Job: Nvme3n1 ended in about 0.93 seconds with error
00:22:11.631 Verification LBA range: start 0x0 length 0x400
00:22:11.631 Nvme3n1  : 0.93  205.47  12.84  68.49  0.00  223263.28  13791.05  215186.03
00:22:11.631 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:11.631 Job: Nvme4n1 ended in about 0.94 seconds with error
00:22:11.631 Verification LBA range: start 0x0 length 0x400
00:22:11.631 Nvme4n1  : 0.94  205.03  12.81  68.34  0.00  219819.19  13677.08  223392.28
00:22:11.631 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:11.631 Job: Nvme5n1 ended in about 0.94 seconds with error
00:22:11.631 Verification LBA range: start 0x0 length 0x400
00:22:11.631 Nvme5n1  : 0.94  208.86  13.05  68.20  0.00  213027.95  14930.81  223392.28
00:22:11.631 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:11.631 Job: Nvme6n1 ended in about 0.94 seconds with error
00:22:11.631 Verification LBA range: start 0x0 length 0x400
00:22:11.631 Nvme6n1  : 0.94  204.17  12.76  68.06  0.00  212883.14  23934.89  222480.47
00:22:11.631 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:11.631 Job: Nvme7n1 ended in about 0.94 seconds with error
00:22:11.631 Verification LBA range: start 0x0 length 0x400
00:22:11.631 Nvme7n1  : 0.94  203.74  12.73  67.91  0.00  209490.81  17096.35  218833.25
00:22:11.631 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:11.631 Job: Nvme8n1 ended in about 0.94 seconds with error
00:22:11.631 Verification LBA range: start 0x0 length 0x400
00:22:11.631 Nvme8n1  : 0.94  203.31  12.71  67.77  0.00  205943.10  15614.66  218833.25
00:22:11.631 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:11.631 Job: Nvme9n1 ended in about 0.92 seconds with error
00:22:11.631 Verification LBA range: start 0x0 length 0x400
00:22:11.631 Nvme9n1  : 0.92  208.54  13.03  69.51  0.00  196089.32  14816.83  221568.67
00:22:11.631 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:11.631 Job: Nvme10n1 ended in about 0.93 seconds with error
00:22:11.631 Verification LBA range: start 0x0 length 0x400
00:22:11.631 Nvme10n1 : 0.93  207.25  12.95  69.08  0.00  193539.12  18805.98  220656.86
00:22:11.631 [2024-11-19T09:49:19.080Z] ===================================================================================================================
00:22:11.631 [2024-11-19T09:49:19.080Z] Total    :       2061.86 128.87 685.87  0.00  212744.75  13677.08  248011.02
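The MiB/s column follows directly from the fixed 65536-byte IO size: each IO is 64 KiB, i.e. 1/16 MiB, so MiB/s = IOPS / 16 (for Nvme1n1, 207.88 / 16 ≈ 12.99). A one-line awk check of that relation:

  awk 'BEGIN { iops = 207.88; io_size = 65536; printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024) }'   # prints 12.99 MiB/s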
00:22:11.631 [2024-11-19 10:49:18.907636] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:11.631 [2024-11-19 10:49:18.907685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:22:11.631 [2024-11-19 10:49:18.908000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:11.631 [2024-11-19 10:49:18.908018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef970 with addr=10.0.0.2, port=4420 00:22:11.631 [2024-11-19 10:49:18.908029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaef970 is same with the state(6) to be set 00:22:11.631 [2024-11-19 10:49:18.908266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:11.631 [2024-11-19 10:49:18.908284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1c6a0 with addr=10.0.0.2, port=4420 00:22:11.631 [2024-11-19 10:49:18.908291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1c6a0 is same with the state(6) to be set 00:22:11.631 [2024-11-19 10:49:18.908507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:11.632 [2024-11-19 10:49:18.908518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf15b40 with addr=10.0.0.2, port=4420 00:22:11.632 [2024-11-19 10:49:18.908525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf15b40 is same with the state(6) to be set 00:22:11.632 [2024-11-19 10:49:18.908684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:11.632 [2024-11-19 10:49:18.908695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf15670 with addr=10.0.0.2, port=4420 00:22:11.632 [2024-11-19 10:49:18.908702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf15670 is same with the state(6) to be set 00:22:11.632 [2024-11-19 10:49:18.910095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:11.632 [2024-11-19 10:49:18.910113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:22:11.632 [2024-11-19 10:49:18.910123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:22:11.632 [2024-11-19 10:49:18.910131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:22:11.632 [2024-11-19 10:49:18.910383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:11.632 [2024-11-19 10:49:18.910398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa06610 with addr=10.0.0.2, port=4420 00:22:11.632 [2024-11-19 10:49:18.910407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06610 is same with the state(6) to be set 00:22:11.632 [2024-11-19 10:49:18.910541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:11.632 [2024-11-19 10:49:18.910552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf4cca0 with addr=10.0.0.2, port=4420 00:22:11.632 [2024-11-19 10:49:18.910559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4cca0 is same with the state(6) to be set 00:22:11.632 [2024-11-19 
10:49:18.910572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaef970 (9): Bad file descriptor 00:22:11.632 [2024-11-19 10:49:18.910583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf1c6a0 (9): Bad file descriptor 00:22:11.632 [2024-11-19 10:49:18.910592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf15b40 (9): Bad file descriptor 00:22:11.632 [2024-11-19 10:49:18.910601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf15670 (9): Bad file descriptor 00:22:11.632 [2024-11-19 10:49:18.910635] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:22:11.632 [2024-11-19 10:49:18.910647] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:22:11.632 [2024-11-19 10:49:18.910656] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:22:11.632 [2024-11-19 10:49:18.910666] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:22:11.632 [2024-11-19 10:49:18.910989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:11.632 [2024-11-19 10:49:18.911003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf21b0 with addr=10.0.0.2, port=4420 00:22:11.632 [2024-11-19 10:49:18.911011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf21b0 is same with the state(6) to be set 00:22:11.632 [2024-11-19 10:49:18.911251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:11.632 [2024-11-19 10:49:18.911263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf4bdf0 with addr=10.0.0.2, port=4420 00:22:11.632 [2024-11-19 10:49:18.911270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4bdf0 is same with the state(6) to be set 00:22:11.632 [2024-11-19 10:49:18.911486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:11.632 [2024-11-19 10:49:18.911497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf61950 with addr=10.0.0.2, port=4420 00:22:11.632 [2024-11-19 10:49:18.911504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf61950 is same with the state(6) to be set 00:22:11.632 [2024-11-19 10:49:18.911642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:11.632 [2024-11-19 10:49:18.911653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf1d50 with addr=10.0.0.2, port=4420 00:22:11.632 [2024-11-19 10:49:18.911660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1d50 is same with the state(6) to be set 00:22:11.632 [2024-11-19 10:49:18.911669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa06610 (9): Bad file descriptor 00:22:11.632 [2024-11-19 10:49:18.911678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4cca0 (9): Bad file descriptor 00:22:11.632 [2024-11-19 10:49:18.911685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:22:11.632 [2024-11-19 10:49:18.911692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:22:11.632 [2024-11-19 10:49:18.911700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:22:11.632 [2024-11-19 10:49:18.911708] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:22:11.632 [2024-11-19 10:49:18.911715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:22:11.632 [2024-11-19 10:49:18.911721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:22:11.632 [2024-11-19 10:49:18.911728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:22:11.632 [2024-11-19 10:49:18.911734] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:22:11.632 [2024-11-19 10:49:18.911741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:22:11.632 [2024-11-19 10:49:18.911746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:22:11.632 [2024-11-19 10:49:18.911752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:22:11.632 [2024-11-19 10:49:18.911758] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:22:11.632 [2024-11-19 10:49:18.911765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:22:11.632 [2024-11-19 10:49:18.911771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:22:11.632 [2024-11-19 10:49:18.911777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:22:11.632 [2024-11-19 10:49:18.911782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:22:11.632 [2024-11-19 10:49:18.911857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf21b0 (9): Bad file descriptor 00:22:11.632 [2024-11-19 10:49:18.911868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4bdf0 (9): Bad file descriptor 00:22:11.632 [2024-11-19 10:49:18.911880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf61950 (9): Bad file descriptor 00:22:11.632 [2024-11-19 10:49:18.911888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf1d50 (9): Bad file descriptor 00:22:11.632 [2024-11-19 10:49:18.911895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:22:11.632 [2024-11-19 10:49:18.911901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:22:11.632 [2024-11-19 10:49:18.911908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 
00:22:11.632 [2024-11-19 10:49:18.911914] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:22:11.632 [2024-11-19 10:49:18.911921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:22:11.632 [2024-11-19 10:49:18.911927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:22:11.632 [2024-11-19 10:49:18.911933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:22:11.632 [2024-11-19 10:49:18.911939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:22:11.632 [2024-11-19 10:49:18.911967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:11.632 [2024-11-19 10:49:18.911974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:11.632 [2024-11-19 10:49:18.911981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:11.632 [2024-11-19 10:49:18.911987] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:11.632 [2024-11-19 10:49:18.911993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:22:11.632 [2024-11-19 10:49:18.912000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:22:11.632 [2024-11-19 10:49:18.912007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:22:11.632 [2024-11-19 10:49:18.912013] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:22:11.632 [2024-11-19 10:49:18.912019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:22:11.632 [2024-11-19 10:49:18.912025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:22:11.632 [2024-11-19 10:49:18.912031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:22:11.632 [2024-11-19 10:49:18.912038] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:22:11.633 [2024-11-19 10:49:18.912045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:22:11.633 [2024-11-19 10:49:18.912051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:22:11.633 [2024-11-19 10:49:18.912057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:22:11.633 [2024-11-19 10:49:18.912063] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
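The recurring "connect() failed, errno = 111" entries and the cnode1-cnode10 "Resetting controller failed" entries above are two sides of the same event: errno 111 is ECONNREFUSED, so the bdevperf host keeps redialing 10.0.0.2:4420 after the target process has already been killed, and every reconnect attempt is refused until each controller is marked failed — which is the shutdown behavior this tc3 case sets out to provoke. A minimal sketch for decoding such errno values on the build host (assuming only that python3 is available there):

python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
# prints: ECONNREFUSED - Connection refused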
00:22:11.892 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:22:12.829 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1750875 00:22:12.829 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:22:12.829 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1750875 00:22:12.829 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:22:12.829 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:12.829 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:22:12.829 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:12.829 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 1750875 00:22:12.829 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:22:12.829 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:12.829 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:22:12.829 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:22:12.829 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:22:12.829 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:12.829 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:22:12.829 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:12.829 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:12.829 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:12.829 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:12.829 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:12.829 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:22:12.829 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:12.829 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:22:12.829 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:12.829 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:12.829 rmmod nvme_tcp 00:22:12.829 
rmmod nvme_fabrics 00:22:12.829 rmmod nvme_keyring 00:22:13.088 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:13.088 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:22:13.088 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:22:13.088 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 1750815 ']' 00:22:13.088 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 1750815 00:22:13.088 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1750815 ']' 00:22:13.088 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1750815 00:22:13.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1750815) - No such process 00:22:13.088 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1750815 is not found' 00:22:13.088 Process with pid 1750815 is not found 00:22:13.088 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:13.088 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:13.088 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:13.088 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:22:13.088 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:22:13.088 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:13.088 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:22:13.088 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:13.088 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:13.088 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.088 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:13.089 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:15.138 00:22:15.138 real 0m7.072s 00:22:15.138 user 0m16.120s 00:22:15.138 sys 0m1.304s 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:15.138 ************************************ 00:22:15.138 END TEST nvmf_shutdown_tc3 00:22:15.138 ************************************ 00:22:15.138 10:49:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:15.138 ************************************ 00:22:15.138 START TEST nvmf_shutdown_tc4 00:22:15.138 ************************************ 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:15.138 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:15.138 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:15.138 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:15.139 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.139 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.139 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:15.139 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:15.139 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:15.139 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:15.139 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:15.139 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.139 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:15.139 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.139 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:15.139 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:15.139 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.139 10:49:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:15.139 Found net devices under 0000:86:00.0: cvl_0_0 00:22:15.139 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:15.139 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:15.139 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.139 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:15.139 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.139 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:15.139 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:15.139 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.139 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:15.139 Found net devices under 0000:86:00.1: cvl_0_1 00:22:15.139 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:15.139 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:15.139 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:15.139 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:15.139 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:15.139 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:15.139 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:15.139 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:15.139 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:15.139 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:15.139 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:15.139 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:15.139 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:15.139 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:15.139 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:15.139 10:49:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:15.139 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:15.139 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:15.139 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:15.139 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:15.139 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:15.398 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:15.398 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:15.398 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:15.398 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:15.398 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:15.398 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:15.398 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:15.398 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:15.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:15.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.432 ms 00:22:15.398 00:22:15.398 --- 10.0.0.2 ping statistics --- 00:22:15.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.398 rtt min/avg/max/mdev = 0.432/0.432/0.432/0.000 ms 00:22:15.398 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:15.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:15.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:22:15.398 00:22:15.398 --- 10.0.0.1 ping statistics --- 00:22:15.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.398 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:22:15.398 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:15.398 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:22:15.398 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:15.398 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:15.398 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:15.398 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:15.398 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:15.398 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:15.399 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:15.399 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:15.399 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:15.399 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:15.399 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:15.399 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=1752121 00:22:15.399 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 1752121 00:22:15.399 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:15.399 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 1752121 ']' 00:22:15.399 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:15.399 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:15.399 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:15.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
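The two successful pings above close out the nvmftestinit plumbing: one ice port is moved into a private network namespace to play the target, while the other stays in the default namespace as the initiator. Condensed into a standalone sketch (interface names cvl_0_0/cvl_0_1, the cvl_0_0_ns_spdk namespace, and the 10.0.0.0/24 addressing are taken directly from the log entries above; root is assumed, and the addr-flush/cleanup steps are omitted):

ip netns add cvl_0_0_ns_spdk                                       # private namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP on the initiator side
ping -c 1 10.0.0.2                                                 # initiator -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator reachability

With that in place, nvmf_tgt is launched inside cvl_0_0_ns_spdk (the nvmf/common.sh@508 entry above) and waitforlisten polls /var/tmp/spdk.sock until the DPDK/EAL initialization that follows completes.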
00:22:15.399 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:15.399 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:15.399 [2024-11-19 10:49:22.841039] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:22:15.399 [2024-11-19 10:49:22.841095] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:15.658 [2024-11-19 10:49:22.918980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:15.658 [2024-11-19 10:49:22.961181] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:15.658 [2024-11-19 10:49:22.961222] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:15.658 [2024-11-19 10:49:22.961228] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:15.658 [2024-11-19 10:49:22.961234] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:15.658 [2024-11-19 10:49:22.961240] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:15.658 [2024-11-19 10:49:22.962746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:15.658 [2024-11-19 10:49:22.962852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:15.658 [2024-11-19 10:49:22.962974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.658 [2024-11-19 10:49:22.962976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:16.226 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:16.226 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:22:16.226 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:16.226 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:16.226 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:16.485 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:16.485 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:16.485 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.485 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:16.485 [2024-11-19 10:49:23.718262] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:16.485 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.485 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:16.485 10:49:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:16.485 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:16.485 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:16.485 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:16.485 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:16.485 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:16.485 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:16.485 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:16.485 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:16.485 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:16.485 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:16.485 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:16.485 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:16.485 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:16.485 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:16.485 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:16.485 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:16.485 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:16.485 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:16.485 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:16.485 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:16.485 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:16.485 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:16.485 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:16.485 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:16.485 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.485 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:16.485 Malloc1 
00:22:16.485 [2024-11-19 10:49:23.825244] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:16.485 Malloc2 00:22:16.485 Malloc3 00:22:16.485 Malloc4 00:22:16.744 Malloc5 00:22:16.744 Malloc6 00:22:16.744 Malloc7 00:22:16.744 Malloc8 00:22:16.744 Malloc9 00:22:17.003 Malloc10 00:22:17.003 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.003 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:17.003 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:17.003 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:17.003 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1752420 00:22:17.003 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:22:17.003 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:22:17.003 [2024-11-19 10:49:24.335120] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:22.288 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:22.288 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1752121 00:22:22.288 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1752121 ']' 00:22:22.288 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1752121 00:22:22.288 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:22:22.288 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:22.288 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1752121 00:22:22.288 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:22.288 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:22.288 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1752121' 00:22:22.288 killing process with pid 1752121 00:22:22.288 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 1752121 00:22:22.288 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 1752121 00:22:22.288 [2024-11-19 10:49:29.325294] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a12fc0 is same with the state(6) to be set 00:22:22.288 [2024-11-19 10:49:29.325344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a12fc0 is same with the state(6) to be set 00:22:22.288 [2024-11-19 10:49:29.325352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a12fc0 is same with the state(6) to be set 00:22:22.288 [2024-11-19 10:49:29.325359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a12fc0 is same with the state(6) to be set 00:22:22.288 [2024-11-19 10:49:29.325366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a12fc0 is same with the state(6) to be set 00:22:22.288 [2024-11-19 10:49:29.325372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a12fc0 is same with the state(6) to be set 00:22:22.288 [2024-11-19 10:49:29.325379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a12fc0 is same with the state(6) to be set 00:22:22.288 [2024-11-19 10:49:29.326804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a13960 is same with the state(6) to be set 00:22:22.288 [2024-11-19 10:49:29.326832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a13960 is same with the state(6) to be set 00:22:22.288 [2024-11-19 10:49:29.326839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a13960 is same with the state(6) to be set 00:22:22.288 [2024-11-19 10:49:29.326847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a13960 is same with the state(6) to be set 00:22:22.288 [2024-11-19 10:49:29.326854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a13960 is same with the state(6) to be set 00:22:22.288 [2024-11-19 10:49:29.326861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a13960 is same with the state(6) to be set 00:22:22.288 [2024-11-19 10:49:29.327844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a12af0 is same with the state(6) to be set 00:22:22.288 [2024-11-19 10:49:29.327870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a12af0 is same with the state(6) to be set 00:22:22.288 [2024-11-19 10:49:29.327879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a12af0 is same with the state(6) to be set 00:22:22.288 [2024-11-19 10:49:29.327886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a12af0 is same with the state(6) to be set 00:22:22.288 [2024-11-19 10:49:29.327893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a12af0 is same with the state(6) to be set 00:22:22.288 [2024-11-19 10:49:29.327900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a12af0 is same with the state(6) to be set 00:22:22.288 [2024-11-19 10:49:29.330477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a147d0 is same with the state(6) to be set 00:22:22.288 [2024-11-19 10:49:29.330504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a147d0 is same with the state(6) to be set 00:22:22.288 [2024-11-19 10:49:29.330518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a147d0 is same with the 
state(6) to be set 00:22:22.288 [2024-11-19 10:49:29.330525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a147d0 is same with the state(6) to be set 00:22:22.288 [2024-11-19 10:49:29.330531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a147d0 is same with the state(6) to be set 00:22:22.288 [2024-11-19 10:49:29.330537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a147d0 is same with the state(6) to be set 00:22:22.288 [2024-11-19 10:49:29.330544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a147d0 is same with the state(6) to be set 00:22:22.288 [2024-11-19 10:49:29.330550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a147d0 is same with the state(6) to be set 00:22:22.288 [2024-11-19 10:49:29.330556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a147d0 is same with the state(6) to be set 00:22:22.288 [2024-11-19 10:49:29.330562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a147d0 is same with the state(6) to be set 00:22:22.288 Write completed with error (sct=0, sc=8) 00:22:22.288 Write completed with error (sct=0, sc=8) 00:22:22.288 Write completed with error (sct=0, sc=8) 00:22:22.288 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 [2024-11-19 10:49:29.333590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197cf10 is same with the state(6) to be set 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 [2024-11-19 10:49:29.333613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197cf10 is same with the state(6) to be set 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 [2024-11-19 10:49:29.333621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197cf10 is same with the state(6) to be set 00:22:22.289 starting I/O failed: -6 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 
Write completed with error (sct=0, sc=8) 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.289 [2024-11-19 10:49:29.334040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:22.289 [2024-11-19 10:49:29.334169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197d400 is same with the state(6) to be set 00:22:22.289 [2024-11-19 10:49:29.334197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197d400 is same with the state(6) to be set 00:22:22.289 [2024-11-19 10:49:29.334208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197d400 is same with the state(6) to be set 00:22:22.289 starting I/O failed: -6 00:22:22.289 starting I/O failed: -6 00:22:22.289 starting I/O failed: -6 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.289 [2024-11-19 10:49:29.335230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197c570 is same with the state(6) to be set 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 [2024-11-19 10:49:29.335253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197c570 is same with the state(6) to be set 00:22:22.289 [2024-11-19 10:49:29.335262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197c570 is same with the state(6) to be set 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 [2024-11-19 10:49:29.335270] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197c570 is same with the state(6) to be set 00:22:22.289 [2024-11-19 10:49:29.335277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197c570 is same with the state(6) to be set 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 [2024-11-19 10:49:29.335284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197c570 is same with the state(6) to be set 00:22:22.289 starting I/O failed: -6 00:22:22.289 [2024-11-19 10:49:29.335290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197c570 is same with the state(6) to be set 00:22:22.289 [2024-11-19 10:49:29.335297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197c570 is same with the state(6) to be set 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 [2024-11-19 10:49:29.335303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197c570 is same with the state(6) to be set 00:22:22.289 starting I/O failed: -6 00:22:22.289 [2024-11-19 10:49:29.335309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197c570 is same with the state(6) to be set 00:22:22.289 [2024-11-19 10:49:29.335316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197c570 is same with the state(6) to be set 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 [2024-11-19 10:49:29.335323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197c570 is same with the state(6) to be set 00:22:22.289 [2024-11-19 10:49:29.335331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197c570 is same with the state(6) to be set 00:22:22.289 [2024-11-19 10:49:29.335337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197c570 is same with Write completed with error (sct=0, sc=8) 00:22:22.289 the state(6) to be set 00:22:22.289 [2024-11-19 10:49:29.335344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197c570 is same with the state(6) to be set 00:22:22.289 [2024-11-19 10:49:29.335350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197c570 is same with the state(6) to be set 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.289 [2024-11-19 10:49:29.335617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: 
[nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.289 Write completed with error (sct=0, sc=8) 00:22:22.289 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write 
completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 [2024-11-19 10:49:29.336981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197a2e0 is same with the state(6) to be set 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 [2024-11-19 10:49:29.337005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197a2e0 is same with the state(6) to be set 00:22:22.290 [2024-11-19 10:49:29.337012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197a2e0 is same with the state(6) to be set 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 [2024-11-19 10:49:29.337019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197a2e0 is same with the state(6) to be set 00:22:22.290 starting I/O failed: -6 00:22:22.290 [2024-11-19 10:49:29.337026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197a2e0 is same with the state(6) to be set 00:22:22.290 [2024-11-19 10:49:29.337033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197a2e0 is same with the state(6) to be set 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error 
(sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 [2024-11-19 10:49:29.337372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197a7d0 is same with the state(6) to be set 00:22:22.290 starting I/O failed: -6 00:22:22.290 [2024-11-19 10:49:29.337392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197a7d0 is same with the state(6) to be set 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 [2024-11-19 10:49:29.337400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197a7d0 is same with the state(6) to be set 00:22:22.290 starting I/O failed: -6 00:22:22.290 [2024-11-19 10:49:29.337407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197a7d0 is same with the state(6) to be set 00:22:22.290 [2024-11-19 10:49:29.337414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197a7d0 is same with the state(6) to be set 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 [2024-11-19 10:49:29.337420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197a7d0 is same with the state(6) to be set 00:22:22.290 starting I/O failed: -6 00:22:22.290 [2024-11-19 10:49:29.337427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197a7d0 is same with the state(6) to be set 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O 
failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 [2024-11-19 10:49:29.337819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197acc0 is same with the state(6) to be set 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.290 [2024-11-19 10:49:29.337840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197acc0 is same with the state(6) to be set 00:22:22.290 [2024-11-19 10:49:29.337848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197acc0 is same with the state(6) to be set 00:22:22.290 [2024-11-19 10:49:29.337855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197acc0 is same with the state(6) to be set 00:22:22.290 [2024-11-19 10:49:29.337862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197acc0 is same with the state(6) to be set 00:22:22.290 Write completed with error (sct=0, sc=8) 00:22:22.290 starting I/O failed: -6 00:22:22.291 [2024-11-19 10:49:29.338020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:22.291 NVMe io qpair process completion error 00:22:22.291 [2024-11-19 10:49:29.338150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1901de0 is same with the state(6) to be set 00:22:22.291 [2024-11-19 10:49:29.338166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1901de0 is same with the state(6) to be set 00:22:22.291 [2024-11-19 10:49:29.338172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1901de0 is same with the state(6) to be set 00:22:22.291 [2024-11-19 10:49:29.338179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1901de0 is same with the state(6) to be set 00:22:22.291 [2024-11-19 10:49:29.338186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1901de0 is same with the state(6) to be set 00:22:22.291 [2024-11-19 10:49:29.338192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1901de0 is same with the state(6) to be set 00:22:22.291 [2024-11-19 10:49:29.338198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1901de0 is same with the state(6) to be set 00:22:22.291 [2024-11-19 10:49:29.338204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1901de0 is same with the state(6) to be set 00:22:22.291 [2024-11-19 10:49:29.338210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1901de0 is same with the state(6) to be set 00:22:22.291 [2024-11-19 10:49:29.339336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1901590 is same with the state(6) to be set 00:22:22.291 [2024-11-19 10:49:29.339349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1901590 is same with the state(6) to be set 00:22:22.291 [2024-11-19 10:49:29.339355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1901590 is same with the state(6) to be set 00:22:22.291 [2024-11-19 10:49:29.339362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1901590 is same
with the state(6) to be set 00:22:22.291 [2024-11-19 10:49:29.339369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1901590 is same with the state(6) to be set 00:22:22.291 [2024-11-19 10:49:29.339379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1901590 is same with the state(6) to be set 00:22:22.291 [2024-11-19 10:49:29.339385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1901590 is same with the state(6) to be set 00:22:22.291 [2024-11-19 10:49:29.339391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1901590 is same with the state(6) to be set 00:22:22.291 [2024-11-19 10:49:29.339397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1901590 is same with the state(6) to be set 00:22:22.291 [2024-11-19 10:49:29.340648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197ddc0 is same with the state(6) to be set 00:22:22.291 [2024-11-19 10:49:29.340669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197ddc0 is same with the state(6) to be set 00:22:22.291 [2024-11-19 10:49:29.340677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197ddc0 is same with the state(6) to be set 00:22:22.291 [2024-11-19 10:49:29.340683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197ddc0 is same with the state(6) to be set 00:22:22.291 [2024-11-19 10:49:29.340690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197ddc0 is same with the state(6) to be set 00:22:22.291 [2024-11-19 10:49:29.340696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197ddc0 is same with the state(6) to be set 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 starting I/O failed: -6 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 starting I/O failed: -6 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 starting I/O failed: -6 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 starting I/O failed: -6 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 starting I/O failed: -6 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 [2024-11-19 10:49:29.341240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197e760 is same with the state(6) to be set 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 starting I/O failed: -6 00:22:22.291 Write completed with error
(sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 [2024-11-19 10:49:29.341308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197d8f0 is same with the state(6) to be set 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 [2024-11-19 10:49:29.341328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197d8f0 is same with the state(6) to be set 00:22:22.291 [2024-11-19 10:49:29.341336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197d8f0 is same with the state(6) to be set 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 [2024-11-19 10:49:29.341343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197d8f0 is same with the state(6) to be set 00:22:22.291 starting I/O failed: -6 00:22:22.291 [2024-11-19 10:49:29.341351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197d8f0 is same with the state(6) to be set 00:22:22.291 [2024-11-19 10:49:29.341357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197d8f0 is same with the state(6) to be set 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 starting I/O failed: -6 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 starting I/O failed: -6 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 starting I/O failed: -6 00:22:22.291 [2024-11-19 10:49:29.341559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 starting I/O failed: -6 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 starting I/O failed: -6 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 starting I/O failed: -6 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 starting I/O failed: -6 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 starting I/O failed: -6 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 starting I/O failed: -6 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 starting I/O failed: -6 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 starting I/O failed: -6 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 starting I/O failed: -6 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with
error (sct=0, sc=8) 00:22:22.291 starting I/O failed: -6 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 starting I/O failed: -6 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 starting I/O failed: -6 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 starting I/O failed: -6 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 starting I/O failed: -6 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 starting I/O failed: -6 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 starting I/O failed: -6 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 starting I/O failed: -6 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 starting I/O failed: -6 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 starting I/O failed: -6 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 starting I/O failed: -6 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 starting I/O failed: -6 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 starting I/O failed: -6 00:22:22.291 [2024-11-19 10:49:29.342431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 starting I/O failed: -6 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 starting I/O failed: -6 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 starting I/O failed: -6 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.291 starting I/O failed: -6 00:22:22.291 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 
00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 [2024-11-19 10:49:29.343472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 
00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 
00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.292 Write completed with error (sct=0, sc=8) 00:22:22.292 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 [2024-11-19 10:49:29.345026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:22.293 NVMe io qpair process completion error 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 [2024-11-19 10:49:29.345605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a117b0 is same with the state(6) to be set 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 [2024-11-19 10:49:29.345626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a117b0 is same with the state(6) to be set 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 [2024-11-19 10:49:29.345633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a117b0 is same with the state(6) to be set 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with 
error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 [2024-11-19 10:49:29.346064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 
Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 [2024-11-19 10:49:29.346996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.293 starting I/O failed: -6 00:22:22.293 Write completed with error (sct=0, sc=8) 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting 
I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 [2024-11-19 10:49:29.347997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:22.294 starting I/O failed: -6 00:22:22.294 starting I/O failed: -6 00:22:22.294 starting I/O failed: -6 00:22:22.294 starting I/O failed: -6 00:22:22.294 starting I/O failed: -6 00:22:22.294 starting I/O failed: -6 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error 
(sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 [2024-11-19 10:49:29.350316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:22.294 NVMe io qpair process completion error 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 
Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 starting I/O failed: -6 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.294 Write completed with error (sct=0, sc=8) 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 [2024-11-19 10:49:29.351339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 
00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 [2024-11-19 10:49:29.352249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 
Write completed with error (sct=0, sc=8) 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 [2024-11-19 10:49:29.353258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write 
completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.295 starting I/O failed: -6 00:22:22.295 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write 
completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 [2024-11-19 10:49:29.355222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:22.296 NVMe io qpair process completion error 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 [2024-11-19 10:49:29.356355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:22.296 starting I/O failed: -6 00:22:22.296 starting I/O failed: -6 00:22:22.296 starting I/O failed: -6 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, 
sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 [2024-11-19 10:49:29.357405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error 
(sct=0, sc=8) 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 Write completed with error (sct=0, sc=8) 00:22:22.296 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 [2024-11-19 10:49:29.358433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:22.297 starting I/O failed: -6 00:22:22.297 starting I/O 
failed: -6 00:22:22.297 starting I/O failed: -6 00:22:22.297 starting I/O failed: -6 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 
00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 [2024-11-19 10:49:29.362604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:22.297 NVMe io qpair process completion error 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 
Write completed with error (sct=0, sc=8) 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 starting I/O failed: -6 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.297 Write completed with error (sct=0, sc=8) 00:22:22.298 [2024-11-19 10:49:29.363613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 Write completed with 
error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 [2024-11-19 10:49:29.364509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 
00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 [2024-11-19 10:49:29.365568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.298 Write completed with error (sct=0, sc=8) 00:22:22.298 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O 
failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 [2024-11-19 10:49:29.368736] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:22.299 NVMe io qpair process completion error 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 [2024-11-19 10:49:29.369842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O 
failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 [2024-11-19 10:49:29.370659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.299 starting I/O failed: -6 00:22:22.299 Write completed with error (sct=0, sc=8) 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error 
(sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 [2024-11-19 10:49:29.371709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, 
sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 
00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 [2024-11-19 10:49:29.373298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:22.300 NVMe io qpair process completion error 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 starting I/O failed: -6 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.300 Write completed with error (sct=0, sc=8) 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 starting I/O failed: -6 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 starting I/O failed: -6 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 starting I/O failed: -6 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 starting I/O failed: -6 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 starting I/O failed: -6 
00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 starting I/O failed: -6 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 starting I/O failed: -6 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 [2024-11-19 10:49:29.374304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 starting I/O failed: -6 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 starting I/O failed: -6 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 starting I/O failed: -6 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 starting I/O failed: -6 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 starting I/O failed: -6 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 starting I/O failed: -6 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 starting I/O failed: -6 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 starting I/O failed: -6 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 starting I/O failed: -6 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 starting I/O failed: -6 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 starting I/O failed: -6 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 starting I/O failed: -6 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 starting I/O failed: -6 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 starting I/O failed: -6 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 starting I/O failed: -6 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 starting I/O failed: -6 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 starting I/O failed: -6 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 starting I/O failed: -6 00:22:22.301 Write completed with error (sct=0, sc=8) 00:22:22.301 
starting I/O failed: -6
00:22:22.301 Write completed with error (sct=0, sc=8)
00:22:22.301 starting I/O failed: -6
00:22:22.301 [the two messages above repeat, interleaved with the transport errors below, for every outstanding write to nqn.2016-06.io.spdk:cnode4]
00:22:22.301 [2024-11-19 10:49:29.375167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:22.301 [2024-11-19 10:49:29.376244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:22.302 [2024-11-19 10:49:29.379514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:22.302 NVMe io qpair process completion error
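Every "starting I/O failed: -6" above is spdk_nvme_perf reporting that a queued write could not be resubmitted once the TCP connection to the target dropped; -6 is the errno behind the "No such device or address" CQ transport error. The shutdown test kills the target deliberately, so the failures themselves are expected; what matters for triage is which subsystems and qpairs they landed on. A minimal sketch, assuming the raw output above is saved to a file named perf.log (the file name is hypothetical):

# hypothetical triage of the output above, assuming it was saved to perf.log
grep -c 'starting I/O failed: -6' perf.log                                       # total failed submissions
grep 'CQ transport error' perf.log | grep -o 'cnode[0-9]*' | sort | uniq -c     # errors per subsystem
grep 'CQ transport error' perf.log | grep -o 'qpair id [0-9]*' | sort | uniq -c # errors per qpair id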
00:22:22.302 Write completed with error (sct=0, sc=8)
00:22:22.302 starting I/O failed: -6
00:22:22.302 [the two messages above repeat, interleaved with the transport errors below, for every outstanding write to nqn.2016-06.io.spdk:cnode7]
00:22:22.302 [2024-11-19 10:49:29.380515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:22.302 [2024-11-19 10:49:29.381326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:22.303 [2024-11-19 10:49:29.382394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:22.304 [2024-11-19 10:49:29.388281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:22.304 NVMe io qpair process completion error
00:22:22.304 Write completed with error (sct=0, sc=8)
00:22:22.304 starting I/O failed: -6
00:22:22.304 [the two messages above repeat, interleaved with the transport errors below, for every outstanding write to nqn.2016-06.io.spdk:cnode6]
00:22:22.304 [2024-11-19 10:49:29.390628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:22.305 [2024-11-19 10:49:29.392490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:22.305 NVMe io qpair process completion error
00:22:22.305 Initializing NVMe Controllers
00:22:22.305 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:22:22.305 Controller IO queue size 128, less than required.
00:22:22.305 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:22.305 [the same attach line and queue-size warning repeat for nqn.2016-06.io.spdk:cnode4, cnode7, cnode2 and cnode6]
00:22:22.305 [the same attach line and queue-size warning repeat for nqn.2016-06.io.spdk:cnode3, cnode9, cnode1, cnode10 and cnode8]
00:22:22.305 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:22:22.305 [the same association line repeats for cnode4, cnode7, cnode2, cnode6, cnode3, cnode9, cnode1, cnode10 and cnode8, all on lcore 0]
00:22:22.305 Initialization complete. Launching workers.
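The queue-size warning above means each controller advertises a 128-entry IO queue, smaller than what the benchmark asked for, so surplus requests wait inside the NVMe driver instead of going out on the wire. Lowering the queue depth (or the IO size) on the perf command line avoids that; the flags below are spdk_nvme_perf's standard options, but the exact invocation used by shutdown.sh is not shown in this log, so treat this as an illustration:

# illustrative standalone run against one subsystem, capping queue depth at the
# controller's 128-entry limit (not the arguments shutdown.sh actually used)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -q 128 -o 4096 -w write -t 10 \
    -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode5'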
00:22:22.305 ========================================================
00:22:22.305 Latency(us)
00:22:22.305 Device Information : IOPS MiB/s Average min max
00:22:22.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2195.78 94.35 58299.13 736.39 113441.19
00:22:22.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2150.45 92.40 59539.38 714.04 128925.79
00:22:22.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2168.67 93.18 59072.80 804.87 106038.71
00:22:22.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2171.05 93.29 58370.50 869.36 104245.63
00:22:22.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2161.51 92.88 59276.66 607.37 124236.48
00:22:22.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2107.93 90.58 60112.04 728.84 101572.80
00:22:22.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2108.58 90.60 60106.64 723.55 99726.04
00:22:22.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2111.62 90.73 60037.79 958.13 102738.04
00:22:22.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2152.18 92.48 58927.78 492.25 98292.30
00:22:22.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2162.81 92.93 58678.68 673.44 109321.44
00:22:22.305 ========================================================
00:22:22.305 Total : 21490.59 923.42 59234.00 492.25 128925.79
00:22:22.305
00:22:22.305 [2024-11-19 10:49:29.395440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc20bc0 is same with the state(6) to be set
00:22:22.305 [2024-11-19 10:49:29.395481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc20890 is same with the state(6) to be set
00:22:22.305 [2024-11-19 10:49:29.395511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc21410 is same with the state(6) to be set
00:22:22.306 [2024-11-19 10:49:29.395540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc22ae0 is same with the state(6) to be set
00:22:22.306 [2024-11-19 10:49:29.395569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc20ef0 is same with the state(6) to be set
00:22:22.306 [2024-11-19 10:49:29.395597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc20560 is same with the state(6) to be set
00:22:22.306 [2024-11-19 10:49:29.395624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc21a70 is same with the state(6) to be set
00:22:22.306 [2024-11-19 10:49:29.395651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc22720 is same with the state(6) to be set
00:22:22.306 [2024-11-19 10:49:29.395678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc22900 is same with the state(6) to be set
00:22:22.306 [2024-11-19 10:49:29.395706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc21740 is same with the state(6) to be set
00:22:22.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
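The Total row should equal the sum of the ten per-subsystem rows. A quick cross-check, again assuming the table text above is saved to perf.log (hypothetical file name; the small difference from 21490.59 is rounding):

# sum the IOPS column (5th field from the end) of the device rows only
grep 'NSID 1 from core' perf.log |
    awk '{iops += $(NF-4)} END {printf "sum IOPS = %.2f\n", iops}'   # ~21490.58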
00:22:22.306 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:22:23.685 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1752420
00:22:23.685 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:22:23.685 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1752420
00:22:23.685 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:22:23.685 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:23.685 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:22:23.685 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:23.685 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 1752420
00:22:23.685 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:22:23.685 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:23.685 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:23.685 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:23.685 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:22:23.685 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:22:23.685 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:22:23.685 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:22:23.685 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:22:23.685 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:23.686 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:22:23.686 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:23.686 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:22:23.686 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:23.686 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:23.686 rmmod nvme_tcp
00:22:23.686 rmmod nvme_fabrics
00:22:23.686 rmmod nvme_keyring
00:22:23.686 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:23.686 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
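The NOT helper traced above inverts an exit status: this step passes only because 'wait 1752420' returns nonzero after the target has been shut down (es=1, and the final '(( !es == 0 ))' check succeeds). A minimal stand-in, not SPDK's exact implementation:

# Minimal stand-in for the NOT helper traced above; the real implementation in
# test/common/autotest_common.sh also validates its argument (the
# valid_exec_arg / type -t steps in the trace).
NOT() {
    local es=0
    "$@" || es=$?
    # invert: succeed only when the wrapped command failed
    (( es != 0 ))
}

# usage, mirroring the trace: passes because waiting on the killed perf job fails
NOT wait 1752420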
00:22:23.686 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:22:23.686 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 1752121 ']'
00:22:23.686 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 1752121
00:22:23.686 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1752121 ']'
00:22:23.686 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1752121
00:22:23.686 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1752121) - No such process
00:22:23.686 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1752121 is not found'
00:22:23.686 Process with pid 1752121 is not found
00:22:23.686 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:23.686 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:22:23.686 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:22:23.686 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:22:23.686 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save
00:22:23.686 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:22:23.686 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore
00:22:23.686 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:23.686 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:22:23.686 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:23.686 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:23.686 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:25.593 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:22:25.593
00:22:25.593 real 0m10.415s
00:22:25.593 user 0m27.667s
00:22:25.593 sys 0m5.080s
00:22:25.593 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:25.593 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:22:25.593 ************************************
00:22:25.593 END TEST nvmf_shutdown_tc4
00:22:25.593 ************************************
00:22:25.593 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:22:25.593
00:22:25.593 real 0m40.178s
00:22:25.593 user 1m37.507s
00:22:25.593 sys 0m13.820s
00:22:25.593 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
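killprocess above probes the pid with kill -0 before killing anything; pid 1752121 is already gone, so it only logs "Process with pid 1752121 is not found" and moves on. The iptr step then restores the firewall by replaying iptables-save output with the SPDK_NVMF-tagged rules filtered out. A minimal stand-in for the helper (illustrative, not the exact SPDK implementation):

# minimal stand-in for the killprocess helper traced above
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    if kill -0 "$pid" 2>/dev/null; then
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    else
        echo "Process with pid $pid is not found"
    fi
}

# the iptr trace amounts to stripping SPDK-tagged firewall rules:
# iptables-save | grep -v SPDK_NVMF | iptables-restore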
00:22:25.593 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:22:25.593 ************************************
00:22:25.593 END TEST nvmf_shutdown
00:22:25.593 ************************************
00:22:25.593 10:49:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:22:25.593 10:49:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:22:25.593 10:49:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:25.593 10:49:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:22:25.593 ************************************
00:22:25.593 START TEST nvmf_nsid
00:22:25.593 ************************************
00:22:25.593 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:22:25.853 * Looking for test storage...
00:22:25.853 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
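run_test, whose banners and timing blocks appear above, wraps each sub-test in START/END markers and a bash time measurement. A minimal stand-in (the real wrapper in autotest_common.sh does more bookkeeping):

# minimal stand-in for the run_test wrapper that prints the banners above
run_test() {
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"    # produces the real/user/sys triples seen in the log
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}

# usage, as in the trace:
# run_test nvmf_nsid ./test/nvmf/target/nsid.sh --transport=tcp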
00:22:25.853 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:22:25.853 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version
00:22:25.853 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-:
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-:
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<'
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 ))
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0
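The cmp_versions trace above splits "1.15" and "2" into components and compares them position by position; ver1[0]=1 < ver2[0]=2, so lt 1.15 2 returns 0 and the lcov-specific coverage options are enabled. The same decision can be made with sort -V; a compact equivalent, not the harness's actual code:

# true when $1 sorts strictly before $2 in version order
version_lt() {
    [ "$1" != "$2" ] &&
        [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

version_lt 1.15 2 && echo "installed lcov (1.15) predates 2"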
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:22:25.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:25.854 --rc genhtml_branch_coverage=1
00:22:25.854 --rc genhtml_function_coverage=1
00:22:25.854 --rc genhtml_legend=1
00:22:25.854 --rc geninfo_all_blocks=1
00:22:25.854 --rc geninfo_unexecuted_blocks=1
00:22:25.854
00:22:25.854 '
00:22:25.854 [the same multi-line option block is assigned three more times, for LCOV_OPTS=, export 'LCOV=lcov ...', and LCOV='lcov ...']
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
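nvme gen-hostnqn mints a fresh host NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>; the harness keeps the full NQN in NVME_HOSTNQN and the bare UUID in NVME_HOSTID so later connect calls can present a consistent host identity. A sketch of how such values are typically consumed (the target address and subsystem NQN are placeholders, not values from this run):

# illustrative use of the generated host identity with nvme-cli
NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # bare UUID, as stored in NVME_HOSTID above
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"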
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:25.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:22:25.854 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:22:25.855 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:22:25.855 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:22:25.855 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:22:25.855 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:25.855 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:25.855 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:25.855 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:25.855 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:25.855 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.855 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:25.855 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.855 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:25.855 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:25.855 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:22:25.855 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:32.428 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:32.428 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:22:32.428 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:32.428 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:32.428 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:32.428 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:32.428 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:32.428 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:22:32.428 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:32.428 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:22:32.428 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:22:32.428 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:22:32.428 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:22:32.428 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:22:32.428 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:22:32.428 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:32.428 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:32.428 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:32.428 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:32.428 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:32.428 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:32.428 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:32.428 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:32.428 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:32.428 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:32.428 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:32.428 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:32.428 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:32.428 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:32.428 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:32.428 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:32.428 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:32.428 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:32.428 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:32.428 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:32.428 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:32.428 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:32.428 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:32.428 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.428 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.428 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:32.428 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:32.428 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:32.429 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
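

The gather_supported_nvmf_pci_devs trace above sorts the detected NICs into e810/x722/mlx arrays by "vendor:device" lookups against a PCI cache, then reports each match with a "Found ..." line. A minimal standalone sketch of that pattern follows; the pci_bus_cache contents below are illustrative (the real helper in test/nvmf/common.sh populates the cache by scanning /sys/bus/pci/devices and covers many more device IDs):

    #!/usr/bin/env bash
    # Illustrative cache mapping "vendor:device" -> PCI addresses; the real
    # script builds this from sysfs rather than hard-coding it.
    declare -A pci_bus_cache=( ["0x8086:0x159b"]="0000:86:00.0 0000:86:00.1" )
    intel=0x8086 mellanox=0x15b3
    e810=() x722=() mlx=()
    # Group ports by NIC family, mirroring the e810+=/x722+=/mlx+= lines above.
    e810+=(${pci_bus_cache["$intel:0x159b"]:-})   # Intel E810 family (ice)
    x722+=(${pci_bus_cache["$intel:0x37d2"]:-})   # Intel X722 (none on this node)
    mlx+=(${pci_bus_cache["$mellanox:0x1017"]:-}) # Mellanox CX-5 (none here)
    pci_devs=("${e810[@]}" "${x722[@]}" "${mlx[@]}")
    for pci in "${pci_devs[@]}"; do
        # The matching net devices live under /sys/bus/pci/devices/$pci/net/.
        echo "Found $pci"
    done

Run against the cache above, this prints the same two "Found 0000:86:00.x" lines seen in the trace.
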
00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:32.429 Found net devices under 0000:86:00.0: cvl_0_0 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:32.429 Found net devices under 0000:86:00.1: cvl_0_1 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:32.429 10:49:38 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:32.429 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:32.429 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:32.429 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:32.429 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:32.429 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:32.429 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:32.429 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.410 ms 00:22:32.429 00:22:32.429 --- 10.0.0.2 ping statistics --- 00:22:32.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.429 rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms 00:22:32.429 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:32.429 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:32.429 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:22:32.429 00:22:32.429 --- 10.0.0.1 ping statistics --- 00:22:32.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.429 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:22:32.429 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:32.429 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:22:32.429 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:32.429 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:32.429 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:32.429 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:32.429 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:32.429 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:32.429 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:32.429 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:22:32.429 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:32.429 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:32.429 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:32.429 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=1756882 00:22:32.429 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 1756882 00:22:32.429 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:22:32.429 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1756882 ']' 00:22:32.429 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.429 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:32.429 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:32.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:32.429 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:32.429 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:32.429 [2024-11-19 10:49:39.149152] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:22:32.429 [2024-11-19 10:49:39.149197] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:32.429 [2024-11-19 10:49:39.228892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.429 [2024-11-19 10:49:39.270053] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:32.429 [2024-11-19 10:49:39.270090] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:32.429 [2024-11-19 10:49:39.270097] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:32.429 [2024-11-19 10:49:39.270103] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:32.430 [2024-11-19 10:49:39.270108] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:32.430 [2024-11-19 10:49:39.270653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.430 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:32.430 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:32.430 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:32.430 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:32.430 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:32.430 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:32.430 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:32.430 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1756907 00:22:32.430 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:22:32.430 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:22:32.430 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:22:32.430 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:22:32.430 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:32.430 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:32.430 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:32.430 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:32.430 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:32.430 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:32.430 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:32.430 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:32.430 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:22:32.430 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:22:32.430 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:22:32.430 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=5c96f9b8-0046-42f6-9c64-2374b2db83b6 00:22:32.430 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:22:32.430 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=77bcd9e5-2475-4607-9ff4-7e30c0ca6714 00:22:32.430 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:22:32.430 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=b5be39d0-8b65-40fb-bb56-765769e061ed 00:22:32.430 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:22:32.430 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.430 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:32.430 null0 00:22:32.430 null1 00:22:32.430 null2 00:22:32.430 [2024-11-19 10:49:39.454022] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:22:32.430 [2024-11-19 10:49:39.454067] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1756907 ] 00:22:32.430 [2024-11-19 10:49:39.457496] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:32.430 [2024-11-19 10:49:39.481693] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:32.430 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.430 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1756907 /var/tmp/tgt2.sock 00:22:32.430 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1756907 ']' 00:22:32.430 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:22:32.430 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:32.430 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:22:32.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
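

The three uuidgen values above become the namespace UUIDs for the nsid test; the checks that follow verify that each namespace's NGUID, as reported by the controller, equals its UUID with the dashes removed and letters uppercased. A minimal sketch of one such check, using the first UUID from this run (requires nvme-cli and jq, and assumes the namespace is connected as /dev/nvme0n1 as it is below):

    # uuid2nguid: an NGUID is the UUID with dashes stripped
    # (nvmf/common.sh@787 does exactly this with tr).
    uuid=5c96f9b8-0046-42f6-9c64-2374b2db83b6      # ns1uuid from this run
    expected=$(tr -d - <<< "$uuid")
    # nvme_get_nguid: ask the controller for the namespace's NGUID as JSON.
    nguid=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
    # nsid.sh@43 uppercases before comparing; ${var^^} does the same here.
    if [[ ${nguid^^} == "${expected^^}" ]]; then
        echo "nsid 1: NGUID matches UUID $uuid"
    fi
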
00:22:32.430 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:32.430 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:32.430 [2024-11-19 10:49:39.525865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.430 [2024-11-19 10:49:39.571891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:32.430 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:32.430 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:32.430 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:22:32.690 [2024-11-19 10:49:40.095387] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:32.690 [2024-11-19 10:49:40.111537] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:22:32.949 nvme0n1 nvme0n2 00:22:32.949 nvme1n1 00:22:32.949 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:22:32.949 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:22:32.949 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:33.886 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:22:33.886 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:22:33.886 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:22:33.886 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:22:33.886 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:22:33.886 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:22:33.886 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:22:33.886 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:33.886 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:33.886 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:33.886 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:22:33.886 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:22:33.886 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:22:34.824 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:34.824 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:34.824 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:34.824 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:34.824 10:49:42 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:34.824 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 5c96f9b8-0046-42f6-9c64-2374b2db83b6 00:22:34.824 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:34.824 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:22:34.824 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:22:34.824 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:22:34.824 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:35.084 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=5c96f9b8004642f69c642374b2db83b6 00:22:35.084 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 5C96F9B8004642F69C642374B2DB83B6 00:22:35.084 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 5C96F9B8004642F69C642374B2DB83B6 == \5\C\9\6\F\9\B\8\0\0\4\6\4\2\F\6\9\C\6\4\2\3\7\4\B\2\D\B\8\3\B\6 ]] 00:22:35.084 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:22:35.084 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:35.084 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:35.084 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:22:35.084 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:35.084 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:22:35.084 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:35.084 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 77bcd9e5-2475-4607-9ff4-7e30c0ca6714 00:22:35.084 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:35.084 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:22:35.084 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:22:35.084 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:22:35.084 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:35.084 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=77bcd9e5247546079ff47e30c0ca6714 00:22:35.084 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 77BCD9E5247546079FF47E30C0CA6714 00:22:35.084 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 77BCD9E5247546079FF47E30C0CA6714 == \7\7\B\C\D\9\E\5\2\4\7\5\4\6\0\7\9\F\F\4\7\E\3\0\C\0\C\A\6\7\1\4 ]] 00:22:35.084 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:22:35.084 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:35.084 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:35.084 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:22:35.084 10:49:42 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:35.084 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:22:35.084 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:35.084 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid b5be39d0-8b65-40fb-bb56-765769e061ed 00:22:35.084 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:35.084 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:22:35.084 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:22:35.084 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:22:35.084 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:35.084 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=b5be39d08b6540fbbb56765769e061ed 00:22:35.084 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo B5BE39D08B6540FBBB56765769E061ED 00:22:35.084 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ B5BE39D08B6540FBBB56765769E061ED == \B\5\B\E\3\9\D\0\8\B\6\5\4\0\F\B\B\B\5\6\7\6\5\7\6\9\E\0\6\1\E\D ]] 00:22:35.084 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:22:35.344 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:22:35.344 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:22:35.344 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1756907 00:22:35.344 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1756907 ']' 00:22:35.344 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1756907 00:22:35.344 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:35.344 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:35.344 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1756907 00:22:35.344 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:35.344 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:35.344 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1756907' 00:22:35.344 killing process with pid 1756907 00:22:35.344 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1756907 00:22:35.344 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1756907 00:22:35.603 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:22:35.603 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:35.604 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:22:35.604 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:35.604 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:22:35.604 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:35.604 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:35.604 rmmod nvme_tcp 00:22:35.604 rmmod nvme_fabrics 00:22:35.604 rmmod nvme_keyring 00:22:35.863 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:35.863 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:22:35.863 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:22:35.863 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 1756882 ']' 00:22:35.863 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 1756882 00:22:35.863 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1756882 ']' 00:22:35.863 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1756882 00:22:35.863 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:35.863 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:35.863 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1756882 00:22:35.863 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:35.863 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:35.863 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1756882' 00:22:35.863 killing process with pid 1756882 00:22:35.863 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1756882 00:22:35.863 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1756882 00:22:35.863 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:35.863 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:35.863 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:35.863 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:22:35.863 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:22:35.863 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:35.863 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:22:35.863 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:35.863 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:35.863 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.863 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:35.863 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.401 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:38.401 00:22:38.401 real 0m12.363s 00:22:38.401 user 0m9.702s 
00:22:38.401 sys 0m5.470s 00:22:38.401 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:38.401 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:38.401 ************************************ 00:22:38.401 END TEST nvmf_nsid 00:22:38.401 ************************************ 00:22:38.401 10:49:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:22:38.401 00:22:38.401 real 12m1.165s 00:22:38.401 user 25m46.122s 00:22:38.401 sys 3m44.621s 00:22:38.401 10:49:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:38.401 10:49:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:38.401 ************************************ 00:22:38.401 END TEST nvmf_target_extra 00:22:38.401 ************************************ 00:22:38.401 10:49:45 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:38.401 10:49:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:38.401 10:49:45 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:38.401 10:49:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:38.401 ************************************ 00:22:38.401 START TEST nvmf_host 00:22:38.401 ************************************ 00:22:38.401 10:49:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:38.401 * Looking for test storage... 00:22:38.401 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:22:38.401 10:49:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:38.401 10:49:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:22:38.401 10:49:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:38.401 10:49:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:38.401 10:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:38.401 10:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:38.401 10:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:38.401 10:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:38.401 10:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:38.401 10:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:38.401 10:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:38.401 10:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:38.401 10:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:38.401 10:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:38.401 10:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:38.401 10:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:22:38.401 10:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:22:38.401 10:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:38.401 10:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:38.401 10:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:22:38.401 10:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:22:38.401 10:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:38.401 10:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:22:38.401 10:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:38.401 10:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:22:38.401 10:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:22:38.401 10:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:38.401 10:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:22:38.401 10:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:38.401 10:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:38.401 10:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:38.401 10:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:22:38.401 10:49:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:38.401 10:49:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:38.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.401 --rc genhtml_branch_coverage=1 00:22:38.401 --rc genhtml_function_coverage=1 00:22:38.401 --rc genhtml_legend=1 00:22:38.401 --rc geninfo_all_blocks=1 00:22:38.401 --rc geninfo_unexecuted_blocks=1 00:22:38.401 00:22:38.401 ' 00:22:38.401 10:49:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:38.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.401 --rc genhtml_branch_coverage=1 00:22:38.401 --rc genhtml_function_coverage=1 00:22:38.401 --rc genhtml_legend=1 00:22:38.402 --rc geninfo_all_blocks=1 00:22:38.402 --rc geninfo_unexecuted_blocks=1 00:22:38.402 00:22:38.402 ' 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:38.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.402 --rc genhtml_branch_coverage=1 00:22:38.402 --rc genhtml_function_coverage=1 00:22:38.402 --rc genhtml_legend=1 00:22:38.402 --rc geninfo_all_blocks=1 00:22:38.402 --rc geninfo_unexecuted_blocks=1 00:22:38.402 00:22:38.402 ' 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:38.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.402 --rc genhtml_branch_coverage=1 00:22:38.402 --rc genhtml_function_coverage=1 00:22:38.402 --rc genhtml_legend=1 00:22:38.402 --rc geninfo_all_blocks=1 00:22:38.402 --rc geninfo_unexecuted_blocks=1 00:22:38.402 00:22:38.402 ' 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
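

The scripts/common.sh trace repeated throughout this log (here again while nvmf_host sources common.sh) is a dotted-version comparison: it decides whether the installed lcov (1.15) is older than 2, which in turn selects the --rc lcov_*_coverage option spelling exported just above. A condensed standalone sketch of the same algorithm; the real cmp_versions additionally validates each field through its decimal helper:

    # Compare dotted versions field by field; missing fields count as 0.
    version_lt() {
        local IFS=.-:               # same separators as scripts/common.sh@336
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1                    # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "lcov < 2: use --rc lcov_*_coverage options"
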
00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:38.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.402 ************************************ 00:22:38.402 START TEST nvmf_multicontroller 00:22:38.402 ************************************ 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:38.402 * Looking for test storage... 
00:22:38.402 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:22:38.402 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:38.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.663 --rc genhtml_branch_coverage=1 00:22:38.663 --rc genhtml_function_coverage=1 00:22:38.663 --rc genhtml_legend=1 00:22:38.663 --rc geninfo_all_blocks=1 00:22:38.663 --rc geninfo_unexecuted_blocks=1 00:22:38.663 00:22:38.663 ' 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:38.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.663 --rc genhtml_branch_coverage=1 00:22:38.663 --rc genhtml_function_coverage=1 00:22:38.663 --rc genhtml_legend=1 00:22:38.663 --rc geninfo_all_blocks=1 00:22:38.663 --rc geninfo_unexecuted_blocks=1 00:22:38.663 00:22:38.663 ' 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:38.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.663 --rc genhtml_branch_coverage=1 00:22:38.663 --rc genhtml_function_coverage=1 00:22:38.663 --rc genhtml_legend=1 00:22:38.663 --rc geninfo_all_blocks=1 00:22:38.663 --rc geninfo_unexecuted_blocks=1 00:22:38.663 00:22:38.663 ' 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:38.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.663 --rc genhtml_branch_coverage=1 00:22:38.663 --rc genhtml_function_coverage=1 00:22:38.663 --rc genhtml_legend=1 00:22:38.663 --rc geninfo_all_blocks=1 00:22:38.663 --rc geninfo_unexecuted_blocks=1 00:22:38.663 00:22:38.663 ' 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:38.663 10:49:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:38.663 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:38.664 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:38.664 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.664 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.664 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.664 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:38.664 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.664 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:22:38.664 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:38.664 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:38.664 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:38.664 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:38.664 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:38.664 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:38.664 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:38.664 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:38.664 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:38.664 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:38.664 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:38.664 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:38.664 10:49:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:38.664 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:38.664 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:38.664 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:38.664 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:38.664 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:38.664 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:38.664 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:38.664 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:38.664 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:38.664 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.664 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:38.664 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.664 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:38.664 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:38.664 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:22:38.664 10:49:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:22:45.235 
10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:45.235 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:45.235 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:45.235 10:49:51 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:45.235 Found net devices under 0000:86:00.0: cvl_0_0 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:45.235 Found net devices under 0000:86:00.1: cvl_0_1 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
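[Editor's note] The discovery pass traced above (nvmf/common.sh@313 through @428) builds ID tables for Intel e810/x722 and Mellanox NICs, then resolves each matching PCI function to its kernel interface through sysfs. Below is a minimal standalone sketch of that lookup, reduced to the two e810 device IDs actually matched in this run (0x1592, 0x159b); it is a reconstruction for illustration, not the script itself, and assumes sysfs is mounted at /sys.

  #!/usr/bin/env bash
  # Map supported NIC PCI functions to net interfaces, as the trace does
  # for 0000:86:00.0 and 0000:86:00.1 (ice driver, device 0x159b).
  intel=0x8086
  e810=(0x1592 0x159b)   # subset of the ID table seen in the trace
  for dev in /sys/bus/pci/devices/*; do
      [[ $(<"$dev/vendor") == "$intel" ]] || continue
      device=$(<"$dev/device")
      for id in "${e810[@]}"; do
          [[ $device == "$id" ]] || continue
          pci=${dev##*/}
          echo "Found $pci ($intel - $device)"
          for net in "$dev"/net/*; do   # present only when a driver is bound
              [[ -e $net ]] && echo "Found net devices under $pci: ${net##*/}"
          done
      done
  done
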
00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:45.235 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:45.235 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.460 ms 00:22:45.235 00:22:45.235 --- 10.0.0.2 ping statistics --- 00:22:45.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.235 rtt min/avg/max/mdev = 0.460/0.460/0.460/0.000 ms 00:22:45.235 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:45.235 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
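[Editor's note] nvmf_tcp_init, traced just above, moves the target port into its own network namespace while the initiator port stays on the host, then verifies reachability in both directions before anything NVMe-related starts. The same steps collected from the trace into a standalone snippet (run as root); the iptables rule is the expanded form of the ipts wrapper shown in the log.

  # Target port cvl_0_0 lives in its own namespace; cvl_0_1 stays on the host.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Admit NVMe/TCP traffic on port 4420, tagged so cleanup can find the rule.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # Both directions must answer before the target is started.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
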
00:22:45.235 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:22:45.235 00:22:45.235 --- 10.0.0.1 ping statistics --- 00:22:45.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.236 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:22:45.236 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:45.236 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:22:45.236 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:45.236 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:45.236 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:45.236 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:45.236 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:45.236 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:45.236 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:45.236 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:45.236 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:45.236 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:45.236 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:45.236 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=1761214 00:22:45.236 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 1761214 00:22:45.236 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:45.236 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1761214 ']' 00:22:45.236 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.236 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:45.236 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:45.236 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:45.236 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:45.236 [2024-11-19 10:49:51.856326] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
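[Editor's note] nvmfappstart then launches nvmf_tgt inside the target namespace and waitforlisten blocks until its RPC socket answers. A hedged equivalent done by hand; the polling loop is only a stand-in for waitforlisten, and SPDK is assumed to point at the checkout used by this job.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -m 0xE pins reactors to cores 1-3, matching the three reactors in the log.
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # Wait until the app answers on the default RPC socket, like waitforlisten.
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done
  echo "nvmf_tgt is listening on /var/tmp/spdk.sock (pid $nvmfpid)"
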
00:22:45.236 [2024-11-19 10:49:51.856379] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:45.236 [2024-11-19 10:49:51.937193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:45.236 [2024-11-19 10:49:51.981014] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:45.236 [2024-11-19 10:49:51.981050] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:45.236 [2024-11-19 10:49:51.981058] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:45.236 [2024-11-19 10:49:51.981064] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:45.236 [2024-11-19 10:49:51.981069] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:45.236 [2024-11-19 10:49:51.982573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:45.236 [2024-11-19 10:49:51.982679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.236 [2024-11-19 10:49:51.982680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:45.236 [2024-11-19 10:49:52.119484] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:45.236 Malloc0 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:45.236 [2024-11-19 10:49:52.186073] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:45.236 [2024-11-19 10:49:52.194010] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:45.236 Malloc1 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1761236 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1761236 /var/tmp/bdevperf.sock 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1761236 ']' 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:45.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
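[Editor's note] The rpc_cmd calls traced above (host/multicontroller.sh@27 through @41) configure the target: one TCP transport, then two malloc-backed subsystems (cnode1/cnode2) that each listen on both 4420 and 4421, which is what later gives the host two network paths into the same namespace. The same sequence issued directly with rpc.py against the target socket; SPDK is the checkout path as in the sketch above.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC nvmf_create_transport -t tcp -o -u 8192      # TCP transport, 8 KiB IO unit
  $RPC bdev_malloc_create 64 512 -b Malloc0         # 64 MiB bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $RPC bdev_malloc_create 64 512 -b Malloc1
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421
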
00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:45.236 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:22:45.237 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:45.237 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.237 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:45.496 NVMe0n1 00:22:45.496 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.496 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:45.496 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:45.496 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.496 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:45.496 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.496 1 00:22:45.496 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:45.496 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:45.496 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:45.496 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:45.496 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:45.496 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:45.496 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:45.496 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:45.496 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.496 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:45.496 request: 00:22:45.496 { 00:22:45.496 "name": "NVMe0", 00:22:45.496 "trtype": "tcp", 00:22:45.496 "traddr": "10.0.0.2", 00:22:45.496 "adrfam": "ipv4", 00:22:45.496 "trsvcid": "4420", 00:22:45.496 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:22:45.496 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:45.496 "hostaddr": "10.0.0.1", 00:22:45.496 "prchk_reftag": false, 00:22:45.496 "prchk_guard": false, 00:22:45.496 "hdgst": false, 00:22:45.496 "ddgst": false, 00:22:45.496 "allow_unrecognized_csi": false, 00:22:45.496 "method": "bdev_nvme_attach_controller", 00:22:45.496 "req_id": 1 00:22:45.496 } 00:22:45.496 Got JSON-RPC error response 00:22:45.496 response: 00:22:45.496 { 00:22:45.496 "code": -114, 00:22:45.497 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:45.497 } 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:45.497 request: 00:22:45.497 { 00:22:45.497 "name": "NVMe0", 00:22:45.497 "trtype": "tcp", 00:22:45.497 "traddr": "10.0.0.2", 00:22:45.497 "adrfam": "ipv4", 00:22:45.497 "trsvcid": "4420", 00:22:45.497 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:45.497 "hostaddr": "10.0.0.1", 00:22:45.497 "prchk_reftag": false, 00:22:45.497 "prchk_guard": false, 00:22:45.497 "hdgst": false, 00:22:45.497 "ddgst": false, 00:22:45.497 "allow_unrecognized_csi": false, 00:22:45.497 "method": "bdev_nvme_attach_controller", 00:22:45.497 "req_id": 1 00:22:45.497 } 00:22:45.497 Got JSON-RPC error response 00:22:45.497 response: 00:22:45.497 { 00:22:45.497 "code": -114, 00:22:45.497 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:45.497 } 00:22:45.497 10:49:52 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:45.497 request: 00:22:45.497 { 00:22:45.497 "name": "NVMe0", 00:22:45.497 "trtype": "tcp", 00:22:45.497 "traddr": "10.0.0.2", 00:22:45.497 "adrfam": "ipv4", 00:22:45.497 "trsvcid": "4420", 00:22:45.497 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.497 "hostaddr": "10.0.0.1", 00:22:45.497 "prchk_reftag": false, 00:22:45.497 "prchk_guard": false, 00:22:45.497 "hdgst": false, 00:22:45.497 "ddgst": false, 00:22:45.497 "multipath": "disable", 00:22:45.497 "allow_unrecognized_csi": false, 00:22:45.497 "method": "bdev_nvme_attach_controller", 00:22:45.497 "req_id": 1 00:22:45.497 } 00:22:45.497 Got JSON-RPC error response 00:22:45.497 response: 00:22:45.497 { 00:22:45.497 "code": -114, 00:22:45.497 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:22:45.497 } 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:45.497 10:49:52 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:45.497 request: 00:22:45.497 { 00:22:45.497 "name": "NVMe0", 00:22:45.497 "trtype": "tcp", 00:22:45.497 "traddr": "10.0.0.2", 00:22:45.497 "adrfam": "ipv4", 00:22:45.497 "trsvcid": "4420", 00:22:45.497 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.497 "hostaddr": "10.0.0.1", 00:22:45.497 "prchk_reftag": false, 00:22:45.497 "prchk_guard": false, 00:22:45.497 "hdgst": false, 00:22:45.497 "ddgst": false, 00:22:45.497 "multipath": "failover", 00:22:45.497 "allow_unrecognized_csi": false, 00:22:45.497 "method": "bdev_nvme_attach_controller", 00:22:45.497 "req_id": 1 00:22:45.497 } 00:22:45.497 Got JSON-RPC error response 00:22:45.497 response: 00:22:45.497 { 00:22:45.497 "code": -114, 00:22:45.497 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:45.497 } 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.497 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:45.756 NVMe0n1 00:22:45.756 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
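[Editor's note] Every failed attach above returns code -114 (-EALREADY): once bdevperf owns a controller named NVMe0, a repeated bdev_nvme_attach_controller is accepted only as a compatible extra path, so a different hostnqn, a different subsystem (cnode2), -x disable, and -x failover over the already-registered path are each rejected, while the plain re-attach to the second listener port at @79 (4421, same subsystem) succeeds. A sketch of the same probes against bdevperf's private RPC socket, with the commands taken from the trace; `|| true` keeps a script moving past the expected errors.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BRPC="$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  # Expected to fail with -114: same controller name, incompatible parameters.
  $BRPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 || true
  $BRPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 || true
  $BRPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable || true
  $BRPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover || true
  # Accepted: same subsystem over the second listener adds a path to NVMe0.
  $BRPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1
  $BRPC bdev_nvme_get_controllers    # lists the attached controllers
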
00:22:45.756 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:45.756 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.757 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:45.757 10:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.757 10:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:45.757 10:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.757 10:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:45.757 00:22:45.757 10:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.757 10:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:45.757 10:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:45.757 10:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.757 10:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:45.757 10:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.757 10:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:45.757 10:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:47.135 { 00:22:47.135 "results": [ 00:22:47.135 { 00:22:47.135 "job": "NVMe0n1", 00:22:47.135 "core_mask": "0x1", 00:22:47.135 "workload": "write", 00:22:47.135 "status": "finished", 00:22:47.135 "queue_depth": 128, 00:22:47.135 "io_size": 4096, 00:22:47.135 "runtime": 1.004878, 00:22:47.135 "iops": 24340.26817185768, 00:22:47.135 "mibps": 95.07917254631906, 00:22:47.135 "io_failed": 0, 00:22:47.135 "io_timeout": 0, 00:22:47.135 "avg_latency_us": 5252.307490547624, 00:22:47.135 "min_latency_us": 3219.8121739130434, 00:22:47.135 "max_latency_us": 14132.980869565217 00:22:47.135 } 00:22:47.135 ], 00:22:47.135 "core_count": 1 00:22:47.135 } 00:22:47.135 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:47.135 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.135 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:47.135 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.135 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:22:47.135 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1761236 00:22:47.135 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 1761236 ']' 00:22:47.135 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1761236 00:22:47.135 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:22:47.135 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:47.135 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1761236 00:22:47.135 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:47.135 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:47.135 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1761236' 00:22:47.135 killing process with pid 1761236 00:22:47.135 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1761236 00:22:47.135 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1761236 00:22:47.135 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:47.135 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.135 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:47.135 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.135 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:47.135 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.135 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:47.135 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.135 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:22:47.135 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:47.135 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:22:47.135 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:47.136 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:22:47.136 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:22:47.136 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:47.136 [2024-11-19 10:49:52.297723] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:22:47.136 [2024-11-19 10:49:52.297769] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1761236 ] 00:22:47.136 [2024-11-19 10:49:52.373146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.136 [2024-11-19 10:49:52.414564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:47.136 [2024-11-19 10:49:53.119657] bdev.c:4686:bdev_name_add: *ERROR*: Bdev name 80621461-b636-4417-93f9-29c777c2f63d already exists 00:22:47.136 [2024-11-19 10:49:53.119685] bdev.c:7824:bdev_register: *ERROR*: Unable to add uuid:80621461-b636-4417-93f9-29c777c2f63d alias for bdev NVMe1n1 00:22:47.136 [2024-11-19 10:49:53.119693] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:47.136 Running I/O for 1 seconds... 00:22:47.136 24331.00 IOPS, 95.04 MiB/s 00:22:47.136 Latency(us) 00:22:47.136 [2024-11-19T09:49:54.585Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.136 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:47.136 NVMe0n1 : 1.00 24340.27 95.08 0.00 0.00 5252.31 3219.81 14132.98 00:22:47.136 [2024-11-19T09:49:54.585Z] =================================================================================================================== 00:22:47.136 [2024-11-19T09:49:54.585Z] Total : 24340.27 95.08 0.00 0.00 5252.31 3219.81 14132.98 00:22:47.136 Received shutdown signal, test time was about 1.000000 seconds 00:22:47.136 00:22:47.136 Latency(us) 00:22:47.136 [2024-11-19T09:49:54.585Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.136 [2024-11-19T09:49:54.585Z] =================================================================================================================== 00:22:47.136 [2024-11-19T09:49:54.585Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:47.136 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:47.136 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:47.136 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:22:47.136 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:22:47.136 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:47.136 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:22:47.136 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:47.136 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:22:47.136 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:47.136 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:47.136 rmmod nvme_tcp 00:22:47.136 rmmod nvme_fabrics 00:22:47.136 rmmod nvme_keyring 00:22:47.395 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:47.395 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:22:47.395 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:22:47.395 
10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 1761214 ']' 00:22:47.395 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 1761214 00:22:47.395 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1761214 ']' 00:22:47.395 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1761214 00:22:47.395 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:22:47.395 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:47.395 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1761214 00:22:47.395 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:47.395 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:47.395 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1761214' 00:22:47.395 killing process with pid 1761214 00:22:47.395 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1761214 00:22:47.395 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1761214 00:22:47.655 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:47.655 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:47.655 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:47.655 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:22:47.655 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:22:47.655 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:47.655 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:22:47.655 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:47.655 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:47.655 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.655 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:47.655 10:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.561 10:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:49.561 00:22:49.561 real 0m11.219s 00:22:49.561 user 0m12.609s 00:22:49.561 sys 0m5.198s 00:22:49.561 10:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:49.561 10:49:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:49.561 ************************************ 00:22:49.561 END TEST nvmf_multicontroller 00:22:49.561 ************************************ 00:22:49.561 10:49:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:22:49.561 10:49:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:49.561 10:49:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:49.561 10:49:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.561 ************************************ 00:22:49.561 START TEST nvmf_aer 00:22:49.562 ************************************ 00:22:49.562 10:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:49.821 * Looking for test storage... 00:22:49.821 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:49.821 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:49.821 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:22:49.821 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:49.821 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:49.821 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:49.821 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:49.821 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:49.821 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:22:49.821 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:22:49.821 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:22:49.821 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:22:49.821 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:22:49.821 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:22:49.821 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:22:49.821 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:49.821 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:22:49.821 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:22:49.821 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:49.821 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:49.821 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:49.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.822 --rc genhtml_branch_coverage=1 00:22:49.822 --rc genhtml_function_coverage=1 00:22:49.822 --rc genhtml_legend=1 00:22:49.822 --rc geninfo_all_blocks=1 00:22:49.822 --rc geninfo_unexecuted_blocks=1 00:22:49.822 00:22:49.822 ' 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:49.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.822 --rc genhtml_branch_coverage=1 00:22:49.822 --rc genhtml_function_coverage=1 00:22:49.822 --rc genhtml_legend=1 00:22:49.822 --rc geninfo_all_blocks=1 00:22:49.822 --rc geninfo_unexecuted_blocks=1 00:22:49.822 00:22:49.822 ' 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:49.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.822 --rc genhtml_branch_coverage=1 00:22:49.822 --rc genhtml_function_coverage=1 00:22:49.822 --rc genhtml_legend=1 00:22:49.822 --rc geninfo_all_blocks=1 00:22:49.822 --rc geninfo_unexecuted_blocks=1 00:22:49.822 00:22:49.822 ' 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:49.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.822 --rc genhtml_branch_coverage=1 00:22:49.822 --rc genhtml_function_coverage=1 00:22:49.822 --rc genhtml_legend=1 00:22:49.822 --rc geninfo_all_blocks=1 00:22:49.822 --rc geninfo_unexecuted_blocks=1 00:22:49.822 00:22:49.822 ' 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:49.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:49.822 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:49.823 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:49.823 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:49.823 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.823 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:49.823 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.823 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:49.823 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:22:49.823 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:22:49.823 10:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:56.396 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:56.396 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:22:56.396 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:56.396 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:56.396 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:56.396 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:56.396 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:56.397 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:56.397 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:56.397 Found net devices under 0000:86:00.0: cvl_0_0 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:56.397 10:50:02 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:56.397 Found net devices under 0000:86:00.1: cvl_0_1 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:56.397 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:56.397 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:56.397 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:56.397 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:56.397 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:56.397 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:56.397 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:56.397 
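This block is the heart of nvmf_tcp_init on phy hardware: the two ports of the ice NIC become a genuine initiator/target pair by moving cvl_0_0 into a private network namespace, so NVMe/TCP traffic to 10.0.0.2:4420 crosses the link instead of loopback. The same topology as a standalone sequence, with the interface names and 10.0.0.0/24 addresses exactly as this node uses them (substitute your own):

# Mirrors the nvmf_tcp_init trace above; cvl_0_0/cvl_0_1 are this node's ice ports.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                          # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target address inside the namespace
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port

Everything the target does from here on runs under ip netns exec cvl_0_0_ns_spdk, which is why the ping back to 10.0.0.1 and the nvmf_tgt launch below both carry that prefix.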
10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:56.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:56.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:22:56.397 00:22:56.397 --- 10.0.0.2 ping statistics --- 00:22:56.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:56.397 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:22:56.397 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:56.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:56.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:22:56.397 00:22:56.397 --- 10.0.0.1 ping statistics --- 00:22:56.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:56.397 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:22:56.397 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:56.397 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:22:56.397 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:56.397 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:56.397 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:56.397 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:56.397 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:56.397 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:56.397 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:56.397 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:56.397 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:56.397 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:56.397 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:56.397 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=1765183 00:22:56.397 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:56.397 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1765183 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 1765183 ']' 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:56.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:56.398 [2024-11-19 10:50:03.221459] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:22:56.398 [2024-11-19 10:50:03.221511] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:56.398 [2024-11-19 10:50:03.301055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:56.398 [2024-11-19 10:50:03.344994] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:56.398 [2024-11-19 10:50:03.345034] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:56.398 [2024-11-19 10:50:03.345041] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:56.398 [2024-11-19 10:50:03.345047] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:56.398 [2024-11-19 10:50:03.345052] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:56.398 [2024-11-19 10:50:03.346670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:56.398 [2024-11-19 10:50:03.346777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:56.398 [2024-11-19 10:50:03.346884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:56.398 [2024-11-19 10:50:03.346884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:56.398 [2024-11-19 10:50:03.484431] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:56.398 Malloc0 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:56.398 [2024-11-19 10:50:03.550676] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:56.398 [ 00:22:56.398 { 00:22:56.398 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:56.398 "subtype": "Discovery", 00:22:56.398 "listen_addresses": [], 00:22:56.398 "allow_any_host": true, 00:22:56.398 "hosts": [] 00:22:56.398 }, 00:22:56.398 { 00:22:56.398 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:56.398 "subtype": "NVMe", 00:22:56.398 "listen_addresses": [ 00:22:56.398 { 00:22:56.398 "trtype": "TCP", 00:22:56.398 "adrfam": "IPv4", 00:22:56.398 "traddr": "10.0.0.2", 00:22:56.398 "trsvcid": "4420" 00:22:56.398 } 00:22:56.398 ], 00:22:56.398 "allow_any_host": true, 00:22:56.398 "hosts": [], 00:22:56.398 "serial_number": "SPDK00000000000001", 00:22:56.398 "model_number": "SPDK bdev Controller", 00:22:56.398 "max_namespaces": 2, 00:22:56.398 "min_cntlid": 1, 00:22:56.398 "max_cntlid": 65519, 00:22:56.398 "namespaces": [ 00:22:56.398 { 00:22:56.398 "nsid": 1, 00:22:56.398 "bdev_name": "Malloc0", 00:22:56.398 "name": "Malloc0", 00:22:56.398 "nguid": "4E23D003B9CE42578CDE61ABD35E60FC", 00:22:56.398 "uuid": "4e23d003-b9ce-4257-8cde-61abd35e60fc" 00:22:56.398 } 00:22:56.398 ] 00:22:56.398 } 00:22:56.398 ] 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1765254 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:56.398 Malloc1 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.398 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:56.658 Asynchronous Event Request test 00:22:56.658 Attaching to 10.0.0.2 00:22:56.658 Attached to 10.0.0.2 00:22:56.658 Registering asynchronous event callbacks... 00:22:56.658 Starting namespace attribute notice tests for all controllers... 00:22:56.658 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:56.658 aer_cb - Changed Namespace 00:22:56.658 Cleaning up... 
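Before the subsystem JSON that follows, note what the aer tool's output above demonstrates: the host attached to cnode1 (created with -m 2, so it has room for a second namespace) and registered asynchronous event callbacks; the test then hot-added Malloc1 as nsid 2, and the target completed the outstanding AER as a Notice event (aen_event_type 0x02) for Namespace Attribute Changed (aen_event_info 0x00), directing the host to log page 4, the Changed Namespace List. The trigger is just two RPCs; rpc_cmd in the trace is the harness wrapper around scripts/rpc.py, so done by hand it would look roughly like this (the rpc.py path and the default /var/tmp/spdk.sock socket are assumptions):

# Hot-add a second namespace to fire the Namespace Attribute Changed AEN.
./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
# The attached host then sees aen_event_type 0x02 / aen_event_info 0x00 and
# re-reads log page 4 to learn which namespaces changed, as aer_cb shows above.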
00:22:56.658 [ 00:22:56.658 { 00:22:56.658 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:56.658 "subtype": "Discovery", 00:22:56.658 "listen_addresses": [], 00:22:56.658 "allow_any_host": true, 00:22:56.658 "hosts": [] 00:22:56.658 }, 00:22:56.658 { 00:22:56.658 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:56.658 "subtype": "NVMe", 00:22:56.658 "listen_addresses": [ 00:22:56.658 { 00:22:56.658 "trtype": "TCP", 00:22:56.658 "adrfam": "IPv4", 00:22:56.658 "traddr": "10.0.0.2", 00:22:56.658 "trsvcid": "4420" 00:22:56.658 } 00:22:56.658 ], 00:22:56.658 "allow_any_host": true, 00:22:56.658 "hosts": [], 00:22:56.658 "serial_number": "SPDK00000000000001", 00:22:56.658 "model_number": "SPDK bdev Controller", 00:22:56.658 "max_namespaces": 2, 00:22:56.658 "min_cntlid": 1, 00:22:56.658 "max_cntlid": 65519, 00:22:56.658 "namespaces": [ 00:22:56.658 { 00:22:56.658 "nsid": 1, 00:22:56.658 "bdev_name": "Malloc0", 00:22:56.658 "name": "Malloc0", 00:22:56.658 "nguid": "4E23D003B9CE42578CDE61ABD35E60FC", 00:22:56.658 "uuid": "4e23d003-b9ce-4257-8cde-61abd35e60fc" 00:22:56.658 }, 00:22:56.658 { 00:22:56.658 "nsid": 2, 00:22:56.658 "bdev_name": "Malloc1", 00:22:56.658 "name": "Malloc1", 00:22:56.658 "nguid": "BF0048DCA71342708357B8B9D5F198D7", 00:22:56.658 "uuid": "bf0048dc-a713-4270-8357-b8b9d5f198d7" 00:22:56.658 } 00:22:56.658 ] 00:22:56.658 } 00:22:56.658 ] 00:22:56.658 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.658 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1765254 00:22:56.658 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:56.658 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.658 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:56.658 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.658 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:56.658 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.658 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:56.658 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.658 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:56.658 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.658 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:56.658 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.658 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:56.658 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:56.658 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:56.658 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:22:56.658 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:56.658 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:22:56.658 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:56.658 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:56.658 rmmod 
nvme_tcp 00:22:56.658 rmmod nvme_fabrics 00:22:56.658 rmmod nvme_keyring 00:22:56.658 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:56.658 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:22:56.658 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:22:56.658 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 1765183 ']' 00:22:56.658 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1765183 00:22:56.658 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 1765183 ']' 00:22:56.658 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 1765183 00:22:56.658 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:22:56.658 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:56.658 10:50:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1765183 00:22:56.658 10:50:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:56.658 10:50:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:56.658 10:50:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1765183' 00:22:56.658 killing process with pid 1765183 00:22:56.658 10:50:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 1765183 00:22:56.658 10:50:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 1765183 00:22:56.918 10:50:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:56.918 10:50:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:56.918 10:50:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:56.918 10:50:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:22:56.918 10:50:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:56.918 10:50:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:22:56.918 10:50:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:22:56.918 10:50:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:56.918 10:50:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:56.918 10:50:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.918 10:50:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:56.918 10:50:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:58.823 10:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:59.082 00:22:59.082 real 0m9.284s 00:22:59.082 user 0m5.131s 00:22:59.082 sys 0m4.934s 00:22:59.082 10:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:59.082 10:50:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:59.082 ************************************ 00:22:59.082 END TEST nvmf_aer 00:22:59.082 ************************************ 00:22:59.082 10:50:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:59.082 10:50:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:59.082 10:50:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:59.082 10:50:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.082 ************************************ 00:22:59.082 START TEST nvmf_async_init 00:22:59.082 ************************************ 00:22:59.082 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:59.082 * Looking for test storage... 00:22:59.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:59.082 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:59.082 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:22:59.082 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:59.082 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:59.082 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:59.082 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:59.082 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:59.082 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:22:59.082 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:22:59.082 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:22:59.082 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:22:59.082 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:22:59.082 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:22:59.083 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:22:59.083 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:59.083 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:22:59.083 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:22:59.083 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:59.083 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:59.083 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:22:59.083 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:22:59.083 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:59.083 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:22:59.083 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:22:59.083 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:22:59.083 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:22:59.083 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:59.083 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:22:59.083 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:22:59.083 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:59.083 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:59.083 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:22:59.083 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:59.083 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:59.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.083 --rc genhtml_branch_coverage=1 00:22:59.083 --rc genhtml_function_coverage=1 00:22:59.083 --rc genhtml_legend=1 00:22:59.083 --rc geninfo_all_blocks=1 00:22:59.083 --rc geninfo_unexecuted_blocks=1 00:22:59.083 00:22:59.083 ' 00:22:59.083 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:59.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.083 --rc genhtml_branch_coverage=1 00:22:59.083 --rc genhtml_function_coverage=1 00:22:59.083 --rc genhtml_legend=1 00:22:59.083 --rc geninfo_all_blocks=1 00:22:59.083 --rc geninfo_unexecuted_blocks=1 00:22:59.083 00:22:59.083 ' 00:22:59.083 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:59.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.083 --rc genhtml_branch_coverage=1 00:22:59.083 --rc genhtml_function_coverage=1 00:22:59.083 --rc genhtml_legend=1 00:22:59.083 --rc geninfo_all_blocks=1 00:22:59.083 --rc geninfo_unexecuted_blocks=1 00:22:59.083 00:22:59.083 ' 00:22:59.083 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:59.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.083 --rc genhtml_branch_coverage=1 00:22:59.083 --rc genhtml_function_coverage=1 00:22:59.083 --rc genhtml_legend=1 00:22:59.083 --rc geninfo_all_blocks=1 00:22:59.083 --rc geninfo_unexecuted_blocks=1 00:22:59.083 00:22:59.083 ' 00:22:59.083 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:59.083 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:59.083 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:59.083 10:50:06 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:59.083 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:59.083 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:59.083 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:59.083 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:59.083 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:59.083 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:59.083 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:59.083 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:59.342 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:59.342 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:59.342 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:59.342 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:59.342 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:59.342 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:59.342 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:59.342 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:22:59.342 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:59.342 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:59.342 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:59.342 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.343 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.343 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.343 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:59.343 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.343 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:22:59.343 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:59.343 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:59.343 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:59.343 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:59.343 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:59.343 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:59.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:59.343 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:59.343 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:59.343 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:59.343 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:59.343 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:59.343 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:59.343 10:50:06 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:59.343 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:59.343 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:59.343 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=5bf81623a27841eb9012fdb62ab0ebc2 00:22:59.343 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:59.343 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:59.343 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:59.343 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:59.343 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:59.343 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:59.343 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.343 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:59.343 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.343 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:59.343 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:59.343 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:22:59.343 10:50:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:06.029 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:06.029 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:06.029 Found net devices under 0000:86:00.0: cvl_0_0 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:06.029 Found net devices under 0000:86:00.1: cvl_0_1 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:06.029 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:06.030 10:50:12 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:06.030 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:06.030 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.400 ms 00:23:06.030 00:23:06.030 --- 10.0.0.2 ping statistics --- 00:23:06.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.030 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:06.030 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:06.030 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:23:06.030 00:23:06.030 --- 10.0.0.1 ping statistics --- 00:23:06.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.030 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1768788 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1768788 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 1768788 ']' 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:06.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:06.030 [2024-11-19 10:50:12.550917] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
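For reference, the interface plumbing that nvmf_tcp_init traced above reduces to the following shell sequence (a minimal sketch reusing the cvl_0_* netdev names and 10.0.0.0/24 addresses from this run; on another rig the ice-driver interface names would differ):

    # target-side port goes into its own namespace, initiator side stays in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port and verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The namespace split is what lets one machine act as both initiator and target over real hardware: the two physical ports (0000:86:00.0 / 0000:86:00.1) live in separate network stacks, so the test traffic leaves the host instead of being short-circuited through loopback.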
00:23:06.030 [2024-11-19 10:50:12.550967] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:06.030 [2024-11-19 10:50:12.629867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.030 [2024-11-19 10:50:12.671926] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:06.030 [2024-11-19 10:50:12.671969] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:06.030 [2024-11-19 10:50:12.671977] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:06.030 [2024-11-19 10:50:12.671984] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:06.030 [2024-11-19 10:50:12.671989] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:06.030 [2024-11-19 10:50:12.672555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:06.030 [2024-11-19 10:50:12.808176] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:06.030 null0 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 5bf81623a27841eb9012fdb62ab0ebc2 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:06.030 [2024-11-19 10:50:12.856430] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.030 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:06.030 nvme0n1 00:23:06.030 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.030 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:06.030 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.030 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:06.030 [ 00:23:06.030 { 00:23:06.030 "name": "nvme0n1", 00:23:06.030 "aliases": [ 00:23:06.030 "5bf81623-a278-41eb-9012-fdb62ab0ebc2" 00:23:06.030 ], 00:23:06.030 "product_name": "NVMe disk", 00:23:06.031 "block_size": 512, 00:23:06.031 "num_blocks": 2097152, 00:23:06.031 "uuid": "5bf81623-a278-41eb-9012-fdb62ab0ebc2", 00:23:06.031 "numa_id": 1, 00:23:06.031 "assigned_rate_limits": { 00:23:06.031 "rw_ios_per_sec": 0, 00:23:06.031 "rw_mbytes_per_sec": 0, 00:23:06.031 "r_mbytes_per_sec": 0, 00:23:06.031 "w_mbytes_per_sec": 0 00:23:06.031 }, 00:23:06.031 "claimed": false, 00:23:06.031 "zoned": false, 00:23:06.031 "supported_io_types": { 00:23:06.031 "read": true, 00:23:06.031 "write": true, 00:23:06.031 "unmap": false, 00:23:06.031 "flush": true, 00:23:06.031 "reset": true, 00:23:06.031 "nvme_admin": true, 00:23:06.031 "nvme_io": true, 00:23:06.031 "nvme_io_md": false, 00:23:06.031 "write_zeroes": true, 00:23:06.031 "zcopy": false, 00:23:06.031 "get_zone_info": false, 00:23:06.031 "zone_management": false, 00:23:06.031 "zone_append": false, 00:23:06.031 "compare": true, 00:23:06.031 "compare_and_write": true, 00:23:06.031 "abort": true, 00:23:06.031 "seek_hole": false, 00:23:06.031 "seek_data": false, 00:23:06.031 "copy": true, 00:23:06.031 "nvme_iov_md": false 00:23:06.031 }, 00:23:06.031 
"memory_domains": [ 00:23:06.031 { 00:23:06.031 "dma_device_id": "system", 00:23:06.031 "dma_device_type": 1 00:23:06.031 } 00:23:06.031 ], 00:23:06.031 "driver_specific": { 00:23:06.031 "nvme": [ 00:23:06.031 { 00:23:06.031 "trid": { 00:23:06.031 "trtype": "TCP", 00:23:06.031 "adrfam": "IPv4", 00:23:06.031 "traddr": "10.0.0.2", 00:23:06.031 "trsvcid": "4420", 00:23:06.031 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:06.031 }, 00:23:06.031 "ctrlr_data": { 00:23:06.031 "cntlid": 1, 00:23:06.031 "vendor_id": "0x8086", 00:23:06.031 "model_number": "SPDK bdev Controller", 00:23:06.031 "serial_number": "00000000000000000000", 00:23:06.031 "firmware_revision": "25.01", 00:23:06.031 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:06.031 "oacs": { 00:23:06.031 "security": 0, 00:23:06.031 "format": 0, 00:23:06.031 "firmware": 0, 00:23:06.031 "ns_manage": 0 00:23:06.031 }, 00:23:06.031 "multi_ctrlr": true, 00:23:06.031 "ana_reporting": false 00:23:06.031 }, 00:23:06.031 "vs": { 00:23:06.031 "nvme_version": "1.3" 00:23:06.031 }, 00:23:06.031 "ns_data": { 00:23:06.031 "id": 1, 00:23:06.031 "can_share": true 00:23:06.031 } 00:23:06.031 } 00:23:06.031 ], 00:23:06.031 "mp_policy": "active_passive" 00:23:06.031 } 00:23:06.031 } 00:23:06.031 ] 00:23:06.031 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.031 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:06.031 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.031 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:06.031 [2024-11-19 10:50:13.116964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:06.031 [2024-11-19 10:50:13.117019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f09220 (9): Bad file descriptor 00:23:06.031 [2024-11-19 10:50:13.249033] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:23:06.031 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.031 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:06.031 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.031 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:06.031 [ 00:23:06.031 { 00:23:06.031 "name": "nvme0n1", 00:23:06.031 "aliases": [ 00:23:06.031 "5bf81623-a278-41eb-9012-fdb62ab0ebc2" 00:23:06.031 ], 00:23:06.031 "product_name": "NVMe disk", 00:23:06.031 "block_size": 512, 00:23:06.031 "num_blocks": 2097152, 00:23:06.031 "uuid": "5bf81623-a278-41eb-9012-fdb62ab0ebc2", 00:23:06.031 "numa_id": 1, 00:23:06.031 "assigned_rate_limits": { 00:23:06.031 "rw_ios_per_sec": 0, 00:23:06.031 "rw_mbytes_per_sec": 0, 00:23:06.031 "r_mbytes_per_sec": 0, 00:23:06.031 "w_mbytes_per_sec": 0 00:23:06.031 }, 00:23:06.031 "claimed": false, 00:23:06.031 "zoned": false, 00:23:06.031 "supported_io_types": { 00:23:06.031 "read": true, 00:23:06.031 "write": true, 00:23:06.031 "unmap": false, 00:23:06.031 "flush": true, 00:23:06.031 "reset": true, 00:23:06.031 "nvme_admin": true, 00:23:06.031 "nvme_io": true, 00:23:06.031 "nvme_io_md": false, 00:23:06.031 "write_zeroes": true, 00:23:06.031 "zcopy": false, 00:23:06.031 "get_zone_info": false, 00:23:06.031 "zone_management": false, 00:23:06.031 "zone_append": false, 00:23:06.031 "compare": true, 00:23:06.031 "compare_and_write": true, 00:23:06.031 "abort": true, 00:23:06.031 "seek_hole": false, 00:23:06.031 "seek_data": false, 00:23:06.031 "copy": true, 00:23:06.031 "nvme_iov_md": false 00:23:06.031 }, 00:23:06.031 "memory_domains": [ 00:23:06.031 { 00:23:06.031 "dma_device_id": "system", 00:23:06.031 "dma_device_type": 1 00:23:06.031 } 00:23:06.031 ], 00:23:06.031 "driver_specific": { 00:23:06.031 "nvme": [ 00:23:06.031 { 00:23:06.031 "trid": { 00:23:06.031 "trtype": "TCP", 00:23:06.031 "adrfam": "IPv4", 00:23:06.031 "traddr": "10.0.0.2", 00:23:06.031 "trsvcid": "4420", 00:23:06.031 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:06.031 }, 00:23:06.031 "ctrlr_data": { 00:23:06.031 "cntlid": 2, 00:23:06.031 "vendor_id": "0x8086", 00:23:06.031 "model_number": "SPDK bdev Controller", 00:23:06.031 "serial_number": "00000000000000000000", 00:23:06.031 "firmware_revision": "25.01", 00:23:06.031 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:06.031 "oacs": { 00:23:06.031 "security": 0, 00:23:06.031 "format": 0, 00:23:06.031 "firmware": 0, 00:23:06.031 "ns_manage": 0 00:23:06.031 }, 00:23:06.031 "multi_ctrlr": true, 00:23:06.031 "ana_reporting": false 00:23:06.031 }, 00:23:06.031 "vs": { 00:23:06.031 "nvme_version": "1.3" 00:23:06.031 }, 00:23:06.031 "ns_data": { 00:23:06.031 "id": 1, 00:23:06.031 "can_share": true 00:23:06.031 } 00:23:06.031 } 00:23:06.031 ], 00:23:06.031 "mp_policy": "active_passive" 00:23:06.031 } 00:23:06.031 } 00:23:06.031 ] 00:23:06.031 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.031 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:06.031 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.031 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:06.031 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
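The async_init flow above is driven entirely over SPDK's JSON-RPC socket (rpc_cmd is a thin wrapper around scripts/rpc.py). Replayed by hand against the default /var/tmp/spdk.sock, the non-TLS half of the test is roughly:

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py bdev_null_create null0 1024 512
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 5bf81623a27841eb9012fdb62ab0ebc2
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    rpc.py bdev_get_bdevs -b nvme0n1      # 2097152 blocks x 512 B = the 1024 MiB null bdev
    rpc.py bdev_nvme_reset_controller nvme0
    rpc.py bdev_nvme_detach_controller nvme0

One detail worth noting in the two bdev dumps: ctrlr_data.cntlid moves from 1 before the reset to 2 after it, which is consistent with the "Bad file descriptor" flush error above being the old admin qpair getting torn down before the reset re-established a fresh controller.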
00:23:06.031 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:06.031 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.0X8dv5mrm9 00:23:06.031 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:06.031 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.0X8dv5mrm9 00:23:06.031 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.0X8dv5mrm9 00:23:06.031 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.031 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:06.031 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.031 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:06.031 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.031 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:06.031 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.031 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:06.031 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.031 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:06.031 [2024-11-19 10:50:13.321562] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:06.031 [2024-11-19 10:50:13.321658] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:06.031 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.031 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:23:06.031 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.031 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:06.031 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.031 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:06.031 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.031 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:06.031 [2024-11-19 10:50:13.341630] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:06.031 nvme0n1 00:23:06.031 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.032 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:23:06.032 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.032 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:06.032 [ 00:23:06.032 { 00:23:06.032 "name": "nvme0n1", 00:23:06.032 "aliases": [ 00:23:06.032 "5bf81623-a278-41eb-9012-fdb62ab0ebc2" 00:23:06.032 ], 00:23:06.032 "product_name": "NVMe disk", 00:23:06.032 "block_size": 512, 00:23:06.032 "num_blocks": 2097152, 00:23:06.032 "uuid": "5bf81623-a278-41eb-9012-fdb62ab0ebc2", 00:23:06.032 "numa_id": 1, 00:23:06.032 "assigned_rate_limits": { 00:23:06.032 "rw_ios_per_sec": 0, 00:23:06.032 "rw_mbytes_per_sec": 0, 00:23:06.032 "r_mbytes_per_sec": 0, 00:23:06.032 "w_mbytes_per_sec": 0 00:23:06.032 }, 00:23:06.032 "claimed": false, 00:23:06.032 "zoned": false, 00:23:06.032 "supported_io_types": { 00:23:06.032 "read": true, 00:23:06.032 "write": true, 00:23:06.032 "unmap": false, 00:23:06.032 "flush": true, 00:23:06.032 "reset": true, 00:23:06.032 "nvme_admin": true, 00:23:06.032 "nvme_io": true, 00:23:06.032 "nvme_io_md": false, 00:23:06.032 "write_zeroes": true, 00:23:06.032 "zcopy": false, 00:23:06.032 "get_zone_info": false, 00:23:06.032 "zone_management": false, 00:23:06.032 "zone_append": false, 00:23:06.032 "compare": true, 00:23:06.032 "compare_and_write": true, 00:23:06.032 "abort": true, 00:23:06.032 "seek_hole": false, 00:23:06.032 "seek_data": false, 00:23:06.032 "copy": true, 00:23:06.032 "nvme_iov_md": false 00:23:06.032 }, 00:23:06.032 "memory_domains": [ 00:23:06.032 { 00:23:06.032 "dma_device_id": "system", 00:23:06.032 "dma_device_type": 1 00:23:06.032 } 00:23:06.032 ], 00:23:06.032 "driver_specific": { 00:23:06.032 "nvme": [ 00:23:06.032 { 00:23:06.032 "trid": { 00:23:06.032 "trtype": "TCP", 00:23:06.032 "adrfam": "IPv4", 00:23:06.032 "traddr": "10.0.0.2", 00:23:06.032 "trsvcid": "4421", 00:23:06.032 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:06.032 }, 00:23:06.032 "ctrlr_data": { 00:23:06.032 "cntlid": 3, 00:23:06.032 "vendor_id": "0x8086", 00:23:06.032 "model_number": "SPDK bdev Controller", 00:23:06.032 "serial_number": "00000000000000000000", 00:23:06.032 "firmware_revision": "25.01", 00:23:06.032 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:06.032 "oacs": { 00:23:06.032 "security": 0, 00:23:06.032 "format": 0, 00:23:06.032 "firmware": 0, 00:23:06.032 "ns_manage": 0 00:23:06.032 }, 00:23:06.032 "multi_ctrlr": true, 00:23:06.032 "ana_reporting": false 00:23:06.032 }, 00:23:06.032 "vs": { 00:23:06.032 "nvme_version": "1.3" 00:23:06.032 }, 00:23:06.032 "ns_data": { 00:23:06.032 "id": 1, 00:23:06.032 "can_share": true 00:23:06.032 } 00:23:06.032 } 00:23:06.032 ], 00:23:06.032 "mp_policy": "active_passive" 00:23:06.032 } 00:23:06.032 } 00:23:06.032 ] 00:23:06.032 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.032 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:06.032 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.032 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:06.032 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.032 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.0X8dv5mrm9 00:23:06.342 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
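The TLS leg that just ran uses a pre-shared key file plus a --secure-channel listener; condensed from the trace (same interchange-format PSK, and note both tcp.c and bdev_nvme_rpc.c print that TLS support is still considered experimental):

    KEY=$(mktemp)
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$KEY"
    chmod 0600 "$KEY"
    rpc.py keyring_file_add_key key0 "$KEY"
    rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
    rm -f "$KEY"

The subsystem was created with -a (allow any host) for the plain 4420 listener, so the test flips that off before adding the PSK-gated host entry for 4421; the third bdev dump above then shows the same namespace reached over the secured port with cntlid 3.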
00:23:06.342 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:23:06.342 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:06.342 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:23:06.342 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:06.342 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:23:06.342 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:06.342 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:06.342 rmmod nvme_tcp 00:23:06.342 rmmod nvme_fabrics 00:23:06.342 rmmod nvme_keyring 00:23:06.342 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:06.342 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:23:06.342 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:23:06.342 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1768788 ']' 00:23:06.342 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 1768788 00:23:06.342 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 1768788 ']' 00:23:06.342 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 1768788 00:23:06.342 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:23:06.342 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:06.342 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1768788 00:23:06.342 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:06.342 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:06.342 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1768788' 00:23:06.342 killing process with pid 1768788 00:23:06.342 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 1768788 00:23:06.342 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 1768788 00:23:06.342 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:06.342 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:06.342 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:06.342 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:23:06.342 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:23:06.342 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:06.342 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:23:06.342 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:06.342 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:06.342 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
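nvmftestfini's firewall cleanup in the lines above is worth calling out: every rule the test inserted was tagged with an SPDK_NVMF comment (see the -m comment --comment 'SPDK_NVMF:...' form of the earlier ACCEPT rule), so teardown is a single filter-and-restore pass rather than per-rule deletes:

    iptables-save | grep -v SPDK_NVMF | iptables-restore

Anything else in the ruleset survives untouched, which keeps repeated autotest runs from leaking stale ACCEPT entries on the test ports.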
00:23:06.342 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:06.342 10:50:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:08.889 10:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:08.889 00:23:08.889 real 0m9.464s 00:23:08.889 user 0m3.085s 00:23:08.889 sys 0m4.809s 00:23:08.889 10:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:08.889 10:50:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:08.889 ************************************ 00:23:08.889 END TEST nvmf_async_init 00:23:08.889 ************************************ 00:23:08.889 10:50:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:08.889 10:50:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:08.889 10:50:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:08.889 10:50:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.889 ************************************ 00:23:08.889 START TEST dma 00:23:08.889 ************************************ 00:23:08.889 10:50:15 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:08.889 * Looking for test storage... 00:23:08.889 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:08.889 10:50:15 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:08.889 10:50:15 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:23:08.889 10:50:15 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:08.889 10:50:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:08.889 10:50:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:08.889 10:50:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:08.889 10:50:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:08.889 10:50:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:23:08.889 10:50:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:23:08.889 10:50:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:23:08.889 10:50:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:23:08.889 10:50:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:23:08.889 10:50:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:23:08.889 10:50:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:23:08.889 10:50:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:08.889 10:50:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:23:08.889 10:50:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:23:08.889 10:50:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:08.889 10:50:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:08.889 10:50:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:23:08.889 10:50:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:23:08.889 10:50:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:08.889 10:50:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:23:08.889 10:50:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:23:08.889 10:50:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:23:08.889 10:50:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:23:08.889 10:50:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:08.889 10:50:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:23:08.889 10:50:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:23:08.889 10:50:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:08.889 10:50:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:08.889 10:50:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:23:08.889 10:50:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:08.889 10:50:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:08.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.889 --rc genhtml_branch_coverage=1 00:23:08.889 --rc genhtml_function_coverage=1 00:23:08.889 --rc genhtml_legend=1 00:23:08.889 --rc geninfo_all_blocks=1 00:23:08.889 --rc geninfo_unexecuted_blocks=1 00:23:08.889 00:23:08.889 ' 00:23:08.889 10:50:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:08.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.889 --rc genhtml_branch_coverage=1 00:23:08.889 --rc genhtml_function_coverage=1 00:23:08.889 --rc genhtml_legend=1 00:23:08.889 --rc geninfo_all_blocks=1 00:23:08.889 --rc geninfo_unexecuted_blocks=1 00:23:08.889 00:23:08.889 ' 00:23:08.889 10:50:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:08.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.889 --rc genhtml_branch_coverage=1 00:23:08.889 --rc genhtml_function_coverage=1 00:23:08.889 --rc genhtml_legend=1 00:23:08.889 --rc geninfo_all_blocks=1 00:23:08.889 --rc geninfo_unexecuted_blocks=1 00:23:08.889 00:23:08.889 ' 00:23:08.889 10:50:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:08.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.889 --rc genhtml_branch_coverage=1 00:23:08.889 --rc genhtml_function_coverage=1 00:23:08.889 --rc genhtml_legend=1 00:23:08.889 --rc geninfo_all_blocks=1 00:23:08.889 --rc geninfo_unexecuted_blocks=1 00:23:08.889 00:23:08.889 ' 00:23:08.889 10:50:16 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:08.889 10:50:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:08.889 10:50:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:08.889 10:50:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:08.889 10:50:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:08.890 
10:50:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:08.890 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:08.890 00:23:08.890 real 0m0.210s 00:23:08.890 user 0m0.128s 00:23:08.890 sys 0m0.094s 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:08.890 ************************************ 00:23:08.890 END TEST dma 00:23:08.890 ************************************ 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.890 ************************************ 00:23:08.890 START TEST nvmf_identify 00:23:08.890 
************************************ 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:08.890 * Looking for test storage... 00:23:08.890 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:08.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.890 --rc genhtml_branch_coverage=1 00:23:08.890 --rc genhtml_function_coverage=1 00:23:08.890 --rc genhtml_legend=1 00:23:08.890 --rc geninfo_all_blocks=1 00:23:08.890 --rc geninfo_unexecuted_blocks=1 00:23:08.890 00:23:08.890 ' 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:08.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.890 --rc genhtml_branch_coverage=1 00:23:08.890 --rc genhtml_function_coverage=1 00:23:08.890 --rc genhtml_legend=1 00:23:08.890 --rc geninfo_all_blocks=1 00:23:08.890 --rc geninfo_unexecuted_blocks=1 00:23:08.890 00:23:08.890 ' 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:08.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.890 --rc genhtml_branch_coverage=1 00:23:08.890 --rc genhtml_function_coverage=1 00:23:08.890 --rc genhtml_legend=1 00:23:08.890 --rc geninfo_all_blocks=1 00:23:08.890 --rc geninfo_unexecuted_blocks=1 00:23:08.890 00:23:08.890 ' 00:23:08.890 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:08.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.890 --rc genhtml_branch_coverage=1 00:23:08.890 --rc genhtml_function_coverage=1 00:23:08.890 --rc genhtml_legend=1 00:23:08.890 --rc geninfo_all_blocks=1 00:23:08.890 --rc geninfo_unexecuted_blocks=1 00:23:08.890 00:23:08.890 ' 00:23:08.891 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:08.891 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:08.891 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:08.891 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:08.891 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:08.891 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:08.891 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:08.891 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:08.891 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:08.891 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:08.891 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:08.891 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:09.150 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:09.150 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:09.150 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:09.150 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:09.150 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:09.150 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:09.150 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:09.150 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:23:09.150 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:09.150 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:09.150 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:09.150 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.150 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.150 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.150 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:09.150 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.150 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:23:09.151 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:09.151 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:09.151 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:09.151 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:09.151 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:09.151 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:09.151 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:09.151 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:09.151 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:09.151 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:09.151 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:09.151 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:09.151 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:09.151 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:09.151 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:09.151 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:09.151 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:09.151 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:09.151 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.151 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:09.151 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.151 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:09.151 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:09.151 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:23:09.151 10:50:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:15.729 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:15.729 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
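The loop above walks each matched PCI function and, continuing below, resolves it to its kernel net device by globbing sysfs. A minimal standalone sketch of that mapping, using the two E810 addresses found in this run (an illustration of the technique, not the SPDK script itself):

    for pci in 0000:86:00.0 0000:86:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)         # glob the netdev dir sysfs exposes for this function
        [[ -e ${pci_net_devs[0]} ]] || continue                  # skip if no driver/netdev is bound
        pci_net_devs=("${pci_net_devs[@]##*/}")                  # strip the path, keep only interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"  # e.g. cvl_0_0, cvl_0_1 as seen in this log
    done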
00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:15.729 Found net devices under 0000:86:00.0: cvl_0_0 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:15.729 Found net devices under 0000:86:00.1: cvl_0_1 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:15.729 10:50:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:15.729 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:15.729 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:15.729 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:15.729 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:15.729 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:15.729 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:15.729 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:15.729 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:15.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:15.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.435 ms 00:23:15.729 00:23:15.729 --- 10.0.0.2 ping statistics --- 00:23:15.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.729 rtt min/avg/max/mdev = 0.435/0.435/0.435/0.000 ms 00:23:15.729 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:15.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:15.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:23:15.729 00:23:15.729 --- 10.0.0.1 ping statistics --- 00:23:15.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.730 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1772607 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1772607 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 1772607 ']' 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:15.730 [2024-11-19 10:50:22.300175] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:23:15.730 [2024-11-19 10:50:22.300218] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:15.730 [2024-11-19 10:50:22.380755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:15.730 [2024-11-19 10:50:22.424449] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:15.730 [2024-11-19 10:50:22.424491] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:15.730 [2024-11-19 10:50:22.424500] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:15.730 [2024-11-19 10:50:22.424507] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:15.730 [2024-11-19 10:50:22.424512] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:15.730 [2024-11-19 10:50:22.426000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:15.730 [2024-11-19 10:50:22.426113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:15.730 [2024-11-19 10:50:22.426218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:15.730 [2024-11-19 10:50:22.426218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:15.730 [2024-11-19 10:50:22.527837] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:15.730 Malloc0 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:15.730 [2024-11-19 10:50:22.626747] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:15.730 [ 00:23:15.730 { 00:23:15.730 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:15.730 "subtype": "Discovery", 00:23:15.730 "listen_addresses": [ 00:23:15.730 { 00:23:15.730 "trtype": "TCP", 00:23:15.730 "adrfam": "IPv4", 00:23:15.730 "traddr": "10.0.0.2", 00:23:15.730 "trsvcid": "4420" 00:23:15.730 } 00:23:15.730 ], 00:23:15.730 "allow_any_host": true, 00:23:15.730 "hosts": [] 00:23:15.730 }, 00:23:15.730 { 00:23:15.730 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.730 "subtype": "NVMe", 00:23:15.730 "listen_addresses": [ 00:23:15.730 { 00:23:15.730 "trtype": "TCP", 00:23:15.730 "adrfam": "IPv4", 00:23:15.730 "traddr": "10.0.0.2", 00:23:15.730 "trsvcid": "4420" 00:23:15.730 } 00:23:15.730 ], 00:23:15.730 "allow_any_host": true, 00:23:15.730 "hosts": [], 00:23:15.730 "serial_number": "SPDK00000000000001", 00:23:15.730 "model_number": "SPDK bdev Controller", 00:23:15.730 "max_namespaces": 32, 00:23:15.730 "min_cntlid": 1, 00:23:15.730 "max_cntlid": 65519, 00:23:15.730 "namespaces": [ 00:23:15.730 { 00:23:15.730 "nsid": 1, 00:23:15.730 "bdev_name": "Malloc0", 00:23:15.730 "name": "Malloc0", 00:23:15.730 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:15.730 "eui64": "ABCDEF0123456789", 00:23:15.730 "uuid": "8c1d15b0-7e79-480c-a1bb-000d6f7d3325" 00:23:15.730 } 00:23:15.730 ] 00:23:15.730 } 00:23:15.730 ] 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.730 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:15.730 [2024-11-19 10:50:22.681017] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:23:15.730 [2024-11-19 10:50:22.681065] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1772636 ] 00:23:15.730 [2024-11-19 10:50:22.720911] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:23:15.730 [2024-11-19 10:50:22.724962] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:15.730 [2024-11-19 10:50:22.724969] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:15.730 [2024-11-19 10:50:22.724980] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:15.730 [2024-11-19 10:50:22.724990] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:15.730 [2024-11-19 10:50:22.725553] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:23:15.730 [2024-11-19 10:50:22.725583] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x21ac690 0 00:23:15.730 [2024-11-19 10:50:22.742957] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:15.730 [2024-11-19 10:50:22.742972] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:15.730 [2024-11-19 10:50:22.742977] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:15.730 [2024-11-19 10:50:22.742980] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:15.730 [2024-11-19 10:50:22.743011] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.731 [2024-11-19 10:50:22.743016] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.731 [2024-11-19 10:50:22.743020] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21ac690) 00:23:15.731 [2024-11-19 10:50:22.743032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:15.731 [2024-11-19 10:50:22.743050] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x220e100, cid 0, qid 0 00:23:15.731 [2024-11-19 10:50:22.750958] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.731 [2024-11-19 10:50:22.750966] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.731 [2024-11-19 10:50:22.750970] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.731 [2024-11-19 10:50:22.750974] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x220e100) on tqpair=0x21ac690 00:23:15.731 [2024-11-19 10:50:22.750985] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:15.731 [2024-11-19 10:50:22.750994] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:23:15.731 [2024-11-19 10:50:22.750998] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:23:15.731 [2024-11-19 10:50:22.751011] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.731 [2024-11-19 10:50:22.751014] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.731 [2024-11-19 10:50:22.751018] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21ac690) 00:23:15.731 [2024-11-19 10:50:22.751025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.731 [2024-11-19 10:50:22.751038] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x220e100, cid 0, qid 0 00:23:15.731 [2024-11-19 10:50:22.751195] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.731 [2024-11-19 10:50:22.751202] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.731 [2024-11-19 10:50:22.751205] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.731 [2024-11-19 10:50:22.751208] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x220e100) on tqpair=0x21ac690 00:23:15.731 [2024-11-19 10:50:22.751213] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:23:15.731 [2024-11-19 10:50:22.751219] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:23:15.731 [2024-11-19 10:50:22.751226] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.731 [2024-11-19 10:50:22.751229] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.731 [2024-11-19 10:50:22.751233] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21ac690) 00:23:15.731 [2024-11-19 10:50:22.751239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.731 [2024-11-19 10:50:22.751249] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x220e100, cid 0, qid 0 00:23:15.731 [2024-11-19 10:50:22.751316] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.731 [2024-11-19 10:50:22.751322] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.731 [2024-11-19 10:50:22.751325] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.731 [2024-11-19 10:50:22.751328] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x220e100) on tqpair=0x21ac690 00:23:15.731 [2024-11-19 10:50:22.751333] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:23:15.731 [2024-11-19 10:50:22.751339] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:15.731 [2024-11-19 10:50:22.751345] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.731 [2024-11-19 10:50:22.751349] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.731 [2024-11-19 10:50:22.751352] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21ac690) 00:23:15.731 [2024-11-19 10:50:22.751357] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.731 [2024-11-19 10:50:22.751366] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x220e100, cid 0, qid 0 
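The DEBUG stream above traces spdk_nvme_identify performing the NVMe-oF admin-queue bring-up step by step: a FABRIC CONNECT, then PROPERTY GET reads of the VS, CAP and CC/CSTS registers before the controller is enabled. As a hedged aside, the discovery listener this test created at 10.0.0.2:4420 could equally be queried with stock nvme-cli (addresses and subsystem NQN taken from this log; illustrative only, not part of the test flow):

    nvme discover -t tcp -a 10.0.0.2 -s 4420    # dump the discovery log page from the target
    nvme connect  -t tcp -a 10.0.0.2 -s 4420 \
         -n nqn.2016-06.io.spdk:cnode1 --hostnqn "$(nvme gen-hostnqn)"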
00:23:15.731 [2024-11-19 10:50:22.751443] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.731 [2024-11-19 10:50:22.751449] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.731 [2024-11-19 10:50:22.751452] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.731 [2024-11-19 10:50:22.751455] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x220e100) on tqpair=0x21ac690 00:23:15.731 [2024-11-19 10:50:22.751463] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:15.731 [2024-11-19 10:50:22.751471] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.731 [2024-11-19 10:50:22.751475] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.731 [2024-11-19 10:50:22.751478] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21ac690) 00:23:15.731 [2024-11-19 10:50:22.751483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.731 [2024-11-19 10:50:22.751492] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x220e100, cid 0, qid 0 00:23:15.731 [2024-11-19 10:50:22.751595] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.731 [2024-11-19 10:50:22.751601] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.731 [2024-11-19 10:50:22.751604] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.731 [2024-11-19 10:50:22.751607] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x220e100) on tqpair=0x21ac690 00:23:15.731 [2024-11-19 10:50:22.751611] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:15.731 [2024-11-19 10:50:22.751616] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:15.731 [2024-11-19 10:50:22.751622] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:15.731 [2024-11-19 10:50:22.751730] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:23:15.731 [2024-11-19 10:50:22.751734] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:15.731 [2024-11-19 10:50:22.751742] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.731 [2024-11-19 10:50:22.751745] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.731 [2024-11-19 10:50:22.751748] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21ac690) 00:23:15.731 [2024-11-19 10:50:22.751754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.731 [2024-11-19 10:50:22.751763] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x220e100, cid 0, qid 0 00:23:15.731 [2024-11-19 10:50:22.751878] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.731 [2024-11-19 10:50:22.751883] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.731 [2024-11-19 10:50:22.751886] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.731 [2024-11-19 10:50:22.751889] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x220e100) on tqpair=0x21ac690 00:23:15.731 [2024-11-19 10:50:22.751894] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:15.731 [2024-11-19 10:50:22.751902] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.731 [2024-11-19 10:50:22.751905] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.731 [2024-11-19 10:50:22.751908] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21ac690) 00:23:15.731 [2024-11-19 10:50:22.751914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.731 [2024-11-19 10:50:22.751923] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x220e100, cid 0, qid 0 00:23:15.731 [2024-11-19 10:50:22.751994] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.731 [2024-11-19 10:50:22.752000] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.731 [2024-11-19 10:50:22.752003] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.731 [2024-11-19 10:50:22.752009] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x220e100) on tqpair=0x21ac690 00:23:15.731 [2024-11-19 10:50:22.752013] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:15.731 [2024-11-19 10:50:22.752017] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:15.731 [2024-11-19 10:50:22.752024] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:23:15.731 [2024-11-19 10:50:22.752033] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:15.731 [2024-11-19 10:50:22.752041] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.731 [2024-11-19 10:50:22.752044] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21ac690) 00:23:15.731 [2024-11-19 10:50:22.752050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.731 [2024-11-19 10:50:22.752060] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x220e100, cid 0, qid 0 00:23:15.731 [2024-11-19 10:50:22.752187] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:15.731 [2024-11-19 10:50:22.752193] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:15.731 [2024-11-19 10:50:22.752196] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:15.731 [2024-11-19 10:50:22.752199] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21ac690): datao=0, datal=4096, cccid=0 00:23:15.731 [2024-11-19 10:50:22.752203] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x220e100) on tqpair(0x21ac690): expected_datao=0, payload_size=4096 00:23:15.731 [2024-11-19 10:50:22.752207] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.731 [2024-11-19 10:50:22.752214] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:15.731 [2024-11-19 10:50:22.752217] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:15.731 [2024-11-19 10:50:22.752226] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.731 [2024-11-19 10:50:22.752232] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.731 [2024-11-19 10:50:22.752235] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.731 [2024-11-19 10:50:22.752238] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x220e100) on tqpair=0x21ac690 00:23:15.731 [2024-11-19 10:50:22.752245] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:23:15.731 [2024-11-19 10:50:22.752249] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:23:15.732 [2024-11-19 10:50:22.752253] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:23:15.732 [2024-11-19 10:50:22.752260] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:23:15.732 [2024-11-19 10:50:22.752264] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:23:15.732 [2024-11-19 10:50:22.752268] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:23:15.732 [2024-11-19 10:50:22.752277] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:15.732 [2024-11-19 10:50:22.752284] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.732 [2024-11-19 10:50:22.752287] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.732 [2024-11-19 10:50:22.752290] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21ac690) 00:23:15.732 [2024-11-19 10:50:22.752296] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:15.732 [2024-11-19 10:50:22.752308] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x220e100, cid 0, qid 0 00:23:15.732 [2024-11-19 10:50:22.752382] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.732 [2024-11-19 10:50:22.752388] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.732 [2024-11-19 10:50:22.752391] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.732 [2024-11-19 10:50:22.752394] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x220e100) on tqpair=0x21ac690 00:23:15.732 [2024-11-19 10:50:22.752400] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.732 [2024-11-19 10:50:22.752404] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.732 [2024-11-19 10:50:22.752407] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21ac690) 00:23:15.732 
[2024-11-19 10:50:22.752412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:15.732 [2024-11-19 10:50:22.752417] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.732 [2024-11-19 10:50:22.752421] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.732 [2024-11-19 10:50:22.752423] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x21ac690) 00:23:15.732 [2024-11-19 10:50:22.752428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:15.732 [2024-11-19 10:50:22.752433] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.732 [2024-11-19 10:50:22.752437] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.732 [2024-11-19 10:50:22.752440] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x21ac690) 00:23:15.732 [2024-11-19 10:50:22.752445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:15.732 [2024-11-19 10:50:22.752450] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.732 [2024-11-19 10:50:22.752453] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.732 [2024-11-19 10:50:22.752456] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21ac690) 00:23:15.732 [2024-11-19 10:50:22.752461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:15.732 [2024-11-19 10:50:22.752465] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:15.732 [2024-11-19 10:50:22.752473] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:15.732 [2024-11-19 10:50:22.752478] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.732 [2024-11-19 10:50:22.752481] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21ac690) 00:23:15.732 [2024-11-19 10:50:22.752487] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.732 [2024-11-19 10:50:22.752497] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x220e100, cid 0, qid 0 00:23:15.732 [2024-11-19 10:50:22.752502] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x220e280, cid 1, qid 0 00:23:15.732 [2024-11-19 10:50:22.752506] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x220e400, cid 2, qid 0 00:23:15.732 [2024-11-19 10:50:22.752510] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x220e580, cid 3, qid 0 00:23:15.732 [2024-11-19 10:50:22.752514] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x220e700, cid 4, qid 0 00:23:15.732 [2024-11-19 10:50:22.752634] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.732 [2024-11-19 10:50:22.752640] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.732 [2024-11-19 10:50:22.752644] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:23:15.732 [2024-11-19 10:50:22.752647] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x220e700) on tqpair=0x21ac690 00:23:15.732 [2024-11-19 10:50:22.752654] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:23:15.732 [2024-11-19 10:50:22.752659] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:23:15.732 [2024-11-19 10:50:22.752667] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.732 [2024-11-19 10:50:22.752671] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21ac690) 00:23:15.732 [2024-11-19 10:50:22.752676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.732 [2024-11-19 10:50:22.752686] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x220e700, cid 4, qid 0 00:23:15.732 [2024-11-19 10:50:22.752765] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:15.732 [2024-11-19 10:50:22.752770] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:15.732 [2024-11-19 10:50:22.752773] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:15.732 [2024-11-19 10:50:22.752776] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21ac690): datao=0, datal=4096, cccid=4 00:23:15.732 [2024-11-19 10:50:22.752780] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x220e700) on tqpair(0x21ac690): expected_datao=0, payload_size=4096 00:23:15.732 [2024-11-19 10:50:22.752784] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.732 [2024-11-19 10:50:22.752801] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:15.732 [2024-11-19 10:50:22.752805] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:15.732 [2024-11-19 10:50:22.793095] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.732 [2024-11-19 10:50:22.793105] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.732 [2024-11-19 10:50:22.793108] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.732 [2024-11-19 10:50:22.793112] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x220e700) on tqpair=0x21ac690 00:23:15.732 [2024-11-19 10:50:22.793123] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:23:15.732 [2024-11-19 10:50:22.793145] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.732 [2024-11-19 10:50:22.793149] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21ac690) 00:23:15.732 [2024-11-19 10:50:22.793156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.732 [2024-11-19 10:50:22.793162] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.732 [2024-11-19 10:50:22.793165] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.732 [2024-11-19 10:50:22.793168] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x21ac690) 00:23:15.732 [2024-11-19 10:50:22.793174] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:15.732 [2024-11-19 10:50:22.793188] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x220e700, cid 4, qid 0 00:23:15.732 [2024-11-19 10:50:22.793193] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x220e880, cid 5, qid 0 00:23:15.732 [2024-11-19 10:50:22.793309] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:15.732 [2024-11-19 10:50:22.793315] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:15.732 [2024-11-19 10:50:22.793318] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:15.732 [2024-11-19 10:50:22.793321] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21ac690): datao=0, datal=1024, cccid=4 00:23:15.732 [2024-11-19 10:50:22.793327] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x220e700) on tqpair(0x21ac690): expected_datao=0, payload_size=1024 00:23:15.732 [2024-11-19 10:50:22.793331] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.732 [2024-11-19 10:50:22.793337] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:15.732 [2024-11-19 10:50:22.793340] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:15.732 [2024-11-19 10:50:22.793345] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.732 [2024-11-19 10:50:22.793350] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.732 [2024-11-19 10:50:22.793353] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.732 [2024-11-19 10:50:22.793356] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x220e880) on tqpair=0x21ac690 00:23:15.732 [2024-11-19 10:50:22.835100] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.732 [2024-11-19 10:50:22.835111] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.732 [2024-11-19 10:50:22.835114] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.732 [2024-11-19 10:50:22.835117] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x220e700) on tqpair=0x21ac690 00:23:15.732 [2024-11-19 10:50:22.835128] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.732 [2024-11-19 10:50:22.835132] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21ac690) 00:23:15.732 [2024-11-19 10:50:22.835139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.732 [2024-11-19 10:50:22.835155] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x220e700, cid 4, qid 0 00:23:15.732 [2024-11-19 10:50:22.835277] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:15.732 [2024-11-19 10:50:22.835283] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:15.732 [2024-11-19 10:50:22.835286] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:15.732 [2024-11-19 10:50:22.835289] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21ac690): datao=0, datal=3072, cccid=4 00:23:15.732 [2024-11-19 10:50:22.835293] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x220e700) on tqpair(0x21ac690): expected_datao=0, payload_size=3072 00:23:15.732 [2024-11-19 10:50:22.835297] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.732 [2024-11-19 10:50:22.835303] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:15.732 [2024-11-19 10:50:22.835307] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:15.732 [2024-11-19 10:50:22.835339] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.733 [2024-11-19 10:50:22.835345] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.733 [2024-11-19 10:50:22.835348] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.733 [2024-11-19 10:50:22.835351] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x220e700) on tqpair=0x21ac690 00:23:15.733 [2024-11-19 10:50:22.835359] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.733 [2024-11-19 10:50:22.835362] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21ac690) 00:23:15.733 [2024-11-19 10:50:22.835367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.733 [2024-11-19 10:50:22.835381] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x220e700, cid 4, qid 0 00:23:15.733 [2024-11-19 10:50:22.835471] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:15.733 [2024-11-19 10:50:22.835476] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:15.733 [2024-11-19 10:50:22.835479] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:15.733 [2024-11-19 10:50:22.835482] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21ac690): datao=0, datal=8, cccid=4 00:23:15.733 [2024-11-19 10:50:22.835486] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x220e700) on tqpair(0x21ac690): expected_datao=0, payload_size=8 00:23:15.733 [2024-11-19 10:50:22.835492] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.733 [2024-11-19 10:50:22.835498] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:15.733 [2024-11-19 10:50:22.835501] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:15.733 [2024-11-19 10:50:22.877078] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.733 [2024-11-19 10:50:22.877089] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.733 [2024-11-19 10:50:22.877092] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.733 [2024-11-19 10:50:22.877096] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x220e700) on tqpair=0x21ac690 00:23:15.733 ===================================================== 00:23:15.733 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:15.733 ===================================================== 00:23:15.733 Controller Capabilities/Features 00:23:15.733 ================================ 00:23:15.733 Vendor ID: 0000 00:23:15.733 Subsystem Vendor ID: 0000 00:23:15.733 Serial Number: .................... 00:23:15.733 Model Number: ........................................ 
00:23:15.733 Firmware Version: 25.01 00:23:15.733 Recommended Arb Burst: 0 00:23:15.733 IEEE OUI Identifier: 00 00 00 00:23:15.733 Multi-path I/O 00:23:15.733 May have multiple subsystem ports: No 00:23:15.733 May have multiple controllers: No 00:23:15.733 Associated with SR-IOV VF: No 00:23:15.733 Max Data Transfer Size: 131072 00:23:15.733 Max Number of Namespaces: 0 00:23:15.733 Max Number of I/O Queues: 1024 00:23:15.733 NVMe Specification Version (VS): 1.3 00:23:15.733 NVMe Specification Version (Identify): 1.3 00:23:15.733 Maximum Queue Entries: 128 00:23:15.733 Contiguous Queues Required: Yes 00:23:15.733 Arbitration Mechanisms Supported 00:23:15.733 Weighted Round Robin: Not Supported 00:23:15.733 Vendor Specific: Not Supported 00:23:15.733 Reset Timeout: 15000 ms 00:23:15.733 Doorbell Stride: 4 bytes 00:23:15.733 NVM Subsystem Reset: Not Supported 00:23:15.733 Command Sets Supported 00:23:15.733 NVM Command Set: Supported 00:23:15.733 Boot Partition: Not Supported 00:23:15.733 Memory Page Size Minimum: 4096 bytes 00:23:15.733 Memory Page Size Maximum: 4096 bytes 00:23:15.733 Persistent Memory Region: Not Supported 00:23:15.733 Optional Asynchronous Events Supported 00:23:15.733 Namespace Attribute Notices: Not Supported 00:23:15.733 Firmware Activation Notices: Not Supported 00:23:15.733 ANA Change Notices: Not Supported 00:23:15.733 PLE Aggregate Log Change Notices: Not Supported 00:23:15.733 LBA Status Info Alert Notices: Not Supported 00:23:15.733 EGE Aggregate Log Change Notices: Not Supported 00:23:15.733 Normal NVM Subsystem Shutdown event: Not Supported 00:23:15.733 Zone Descriptor Change Notices: Not Supported 00:23:15.733 Discovery Log Change Notices: Supported 00:23:15.733 Controller Attributes 00:23:15.733 128-bit Host Identifier: Not Supported 00:23:15.733 Non-Operational Permissive Mode: Not Supported 00:23:15.733 NVM Sets: Not Supported 00:23:15.733 Read Recovery Levels: Not Supported 00:23:15.733 Endurance Groups: Not Supported 00:23:15.733 Predictable Latency Mode: Not Supported 00:23:15.733 Traffic Based Keep ALive: Not Supported 00:23:15.733 Namespace Granularity: Not Supported 00:23:15.733 SQ Associations: Not Supported 00:23:15.733 UUID List: Not Supported 00:23:15.733 Multi-Domain Subsystem: Not Supported 00:23:15.733 Fixed Capacity Management: Not Supported 00:23:15.733 Variable Capacity Management: Not Supported 00:23:15.733 Delete Endurance Group: Not Supported 00:23:15.733 Delete NVM Set: Not Supported 00:23:15.733 Extended LBA Formats Supported: Not Supported 00:23:15.733 Flexible Data Placement Supported: Not Supported 00:23:15.733 00:23:15.733 Controller Memory Buffer Support 00:23:15.733 ================================ 00:23:15.733 Supported: No 00:23:15.733 00:23:15.733 Persistent Memory Region Support 00:23:15.733 ================================ 00:23:15.733 Supported: No 00:23:15.733 00:23:15.733 Admin Command Set Attributes 00:23:15.733 ============================ 00:23:15.733 Security Send/Receive: Not Supported 00:23:15.733 Format NVM: Not Supported 00:23:15.733 Firmware Activate/Download: Not Supported 00:23:15.733 Namespace Management: Not Supported 00:23:15.733 Device Self-Test: Not Supported 00:23:15.733 Directives: Not Supported 00:23:15.733 NVMe-MI: Not Supported 00:23:15.733 Virtualization Management: Not Supported 00:23:15.733 Doorbell Buffer Config: Not Supported 00:23:15.733 Get LBA Status Capability: Not Supported 00:23:15.733 Command & Feature Lockdown Capability: Not Supported 00:23:15.733 Abort Command Limit: 1 00:23:15.733 Async 
Event Request Limit: 4 00:23:15.733 Number of Firmware Slots: N/A 00:23:15.733 Firmware Slot 1 Read-Only: N/A 00:23:15.733 Firmware Activation Without Reset: N/A 00:23:15.733 Multiple Update Detection Support: N/A 00:23:15.733 Firmware Update Granularity: No Information Provided 00:23:15.733 Per-Namespace SMART Log: No 00:23:15.733 Asymmetric Namespace Access Log Page: Not Supported 00:23:15.733 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:15.733 Command Effects Log Page: Not Supported 00:23:15.733 Get Log Page Extended Data: Supported 00:23:15.733 Telemetry Log Pages: Not Supported 00:23:15.733 Persistent Event Log Pages: Not Supported 00:23:15.733 Supported Log Pages Log Page: May Support 00:23:15.733 Commands Supported & Effects Log Page: Not Supported 00:23:15.733 Feature Identifiers & Effects Log Page:May Support 00:23:15.733 NVMe-MI Commands & Effects Log Page: May Support 00:23:15.733 Data Area 4 for Telemetry Log: Not Supported 00:23:15.733 Error Log Page Entries Supported: 128 00:23:15.733 Keep Alive: Not Supported 00:23:15.733 00:23:15.733 NVM Command Set Attributes 00:23:15.733 ========================== 00:23:15.733 Submission Queue Entry Size 00:23:15.733 Max: 1 00:23:15.733 Min: 1 00:23:15.733 Completion Queue Entry Size 00:23:15.733 Max: 1 00:23:15.733 Min: 1 00:23:15.733 Number of Namespaces: 0 00:23:15.733 Compare Command: Not Supported 00:23:15.733 Write Uncorrectable Command: Not Supported 00:23:15.733 Dataset Management Command: Not Supported 00:23:15.733 Write Zeroes Command: Not Supported 00:23:15.733 Set Features Save Field: Not Supported 00:23:15.733 Reservations: Not Supported 00:23:15.733 Timestamp: Not Supported 00:23:15.733 Copy: Not Supported 00:23:15.734 Volatile Write Cache: Not Present 00:23:15.734 Atomic Write Unit (Normal): 1 00:23:15.734 Atomic Write Unit (PFail): 1 00:23:15.734 Atomic Compare & Write Unit: 1 00:23:15.734 Fused Compare & Write: Supported 00:23:15.734 Scatter-Gather List 00:23:15.734 SGL Command Set: Supported 00:23:15.734 SGL Keyed: Supported 00:23:15.734 SGL Bit Bucket Descriptor: Not Supported 00:23:15.734 SGL Metadata Pointer: Not Supported 00:23:15.734 Oversized SGL: Not Supported 00:23:15.734 SGL Metadata Address: Not Supported 00:23:15.734 SGL Offset: Supported 00:23:15.734 Transport SGL Data Block: Not Supported 00:23:15.734 Replay Protected Memory Block: Not Supported 00:23:15.734 00:23:15.734 Firmware Slot Information 00:23:15.734 ========================= 00:23:15.734 Active slot: 0 00:23:15.734 00:23:15.734 00:23:15.734 Error Log 00:23:15.734 ========= 00:23:15.734 00:23:15.734 Active Namespaces 00:23:15.734 ================= 00:23:15.734 Discovery Log Page 00:23:15.734 ================== 00:23:15.734 Generation Counter: 2 00:23:15.734 Number of Records: 2 00:23:15.734 Record Format: 0 00:23:15.734 00:23:15.734 Discovery Log Entry 0 00:23:15.734 ---------------------- 00:23:15.734 Transport Type: 3 (TCP) 00:23:15.734 Address Family: 1 (IPv4) 00:23:15.734 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:15.734 Entry Flags: 00:23:15.734 Duplicate Returned Information: 1 00:23:15.734 Explicit Persistent Connection Support for Discovery: 1 00:23:15.734 Transport Requirements: 00:23:15.734 Secure Channel: Not Required 00:23:15.734 Port ID: 0 (0x0000) 00:23:15.734 Controller ID: 65535 (0xffff) 00:23:15.734 Admin Max SQ Size: 128 00:23:15.734 Transport Service Identifier: 4420 00:23:15.734 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:15.734 Transport Address: 10.0.0.2 00:23:15.734 
Discovery Log Entry 1 00:23:15.734 ---------------------- 00:23:15.734 Transport Type: 3 (TCP) 00:23:15.734 Address Family: 1 (IPv4) 00:23:15.734 Subsystem Type: 2 (NVM Subsystem) 00:23:15.734 Entry Flags: 00:23:15.734 Duplicate Returned Information: 0 00:23:15.734 Explicit Persistent Connection Support for Discovery: 0 00:23:15.734 Transport Requirements: 00:23:15.734 Secure Channel: Not Required 00:23:15.734 Port ID: 0 (0x0000) 00:23:15.734 Controller ID: 65535 (0xffff) 00:23:15.734 Admin Max SQ Size: 128 00:23:15.734 Transport Service Identifier: 4420 00:23:15.734 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:15.734 Transport Address: 10.0.0.2 [2024-11-19 10:50:22.877179] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:23:15.734 [2024-11-19 10:50:22.877190] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x220e100) on tqpair=0x21ac690 00:23:15.734 [2024-11-19 10:50:22.877196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.734 [2024-11-19 10:50:22.877201] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x220e280) on tqpair=0x21ac690 00:23:15.734 [2024-11-19 10:50:22.877205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.734 [2024-11-19 10:50:22.877209] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x220e400) on tqpair=0x21ac690 00:23:15.734 [2024-11-19 10:50:22.877214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.734 [2024-11-19 10:50:22.877218] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x220e580) on tqpair=0x21ac690 00:23:15.734 [2024-11-19 10:50:22.877222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.734 [2024-11-19 10:50:22.877232] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.734 [2024-11-19 10:50:22.877235] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.734 [2024-11-19 10:50:22.877239] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21ac690) 00:23:15.734 [2024-11-19 10:50:22.877246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.734 [2024-11-19 10:50:22.877259] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x220e580, cid 3, qid 0 00:23:15.734 [2024-11-19 10:50:22.877321] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.734 [2024-11-19 10:50:22.877327] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.734 [2024-11-19 10:50:22.877330] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.734 [2024-11-19 10:50:22.877334] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x220e580) on tqpair=0x21ac690 00:23:15.734 [2024-11-19 10:50:22.877340] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.734 [2024-11-19 10:50:22.877344] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.734 [2024-11-19 10:50:22.877347] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21ac690) 00:23:15.734 [2024-11-19 
10:50:22.877353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.734 [2024-11-19 10:50:22.877365] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x220e580, cid 3, qid 0 00:23:15.734 [2024-11-19 10:50:22.877441] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.734 [2024-11-19 10:50:22.877447] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.734 [2024-11-19 10:50:22.877450] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.734 [2024-11-19 10:50:22.877453] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x220e580) on tqpair=0x21ac690 00:23:15.734 [2024-11-19 10:50:22.877457] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:23:15.734 [2024-11-19 10:50:22.877464] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:23:15.734 [2024-11-19 10:50:22.877472] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.734 [2024-11-19 10:50:22.877476] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.734 [2024-11-19 10:50:22.877480] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21ac690) 00:23:15.734 [2024-11-19 10:50:22.877485] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.734 [2024-11-19 10:50:22.877496] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x220e580, cid 3, qid 0 00:23:15.734 [2024-11-19 10:50:22.877559] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.734 [2024-11-19 10:50:22.877565] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.734 [2024-11-19 10:50:22.877568] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.734 [2024-11-19 10:50:22.877572] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x220e580) on tqpair=0x21ac690 00:23:15.734 [2024-11-19 10:50:22.877581] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.734 [2024-11-19 10:50:22.877585] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.734 [2024-11-19 10:50:22.877588] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21ac690) 00:23:15.734 [2024-11-19 10:50:22.877594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.734 [2024-11-19 10:50:22.877604] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x220e580, cid 3, qid 0 00:23:15.734 [2024-11-19 10:50:22.877675] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.734 [2024-11-19 10:50:22.877681] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.734 [2024-11-19 10:50:22.877684] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.734 [2024-11-19 10:50:22.877688] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x220e580) on tqpair=0x21ac690 00:23:15.734 [2024-11-19 10:50:22.877696] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.734 [2024-11-19 10:50:22.877700] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.734 [2024-11-19 10:50:22.877703] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21ac690) 00:23:15.734 [2024-11-19 10:50:22.877709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.734 [2024-11-19 10:50:22.877718] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x220e580, cid 3, qid 0 00:23:15.734 [2024-11-19 10:50:22.877793] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.734 [2024-11-19 10:50:22.877799] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.734 [2024-11-19 10:50:22.877802] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.734 [2024-11-19 10:50:22.877806] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x220e580) on tqpair=0x21ac690 00:23:15.734 [2024-11-19 10:50:22.877814] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.734 [2024-11-19 10:50:22.877817] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.734 [2024-11-19 10:50:22.877821] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21ac690) 00:23:15.734 [2024-11-19 10:50:22.877826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.734 [2024-11-19 10:50:22.877836] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x220e580, cid 3, qid 0 00:23:15.734 [2024-11-19 10:50:22.877899] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.734 [2024-11-19 10:50:22.877905] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.734 [2024-11-19 10:50:22.877908] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.734 [2024-11-19 10:50:22.877913] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x220e580) on tqpair=0x21ac690 00:23:15.734 [2024-11-19 10:50:22.877922] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.734 [2024-11-19 10:50:22.877925] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.734 [2024-11-19 10:50:22.877929] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21ac690) 00:23:15.734 [2024-11-19 10:50:22.877934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.735 [2024-11-19 10:50:22.877944] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x220e580, cid 3, qid 0 00:23:15.735 [2024-11-19 10:50:22.881959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.735 [2024-11-19 10:50:22.881965] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.735 [2024-11-19 10:50:22.881968] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.735 [2024-11-19 10:50:22.881972] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x220e580) on tqpair=0x21ac690 00:23:15.735 [2024-11-19 10:50:22.881981] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.735 [2024-11-19 10:50:22.881984] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.735 [2024-11-19 10:50:22.881988] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21ac690) 00:23:15.735 [2024-11-19 10:50:22.881994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.735 [2024-11-19 10:50:22.882005] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x220e580, cid 3, qid 0 00:23:15.735 [2024-11-19 10:50:22.882155] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.735 [2024-11-19 10:50:22.882161] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.735 [2024-11-19 10:50:22.882165] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.735 [2024-11-19 10:50:22.882168] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x220e580) on tqpair=0x21ac690 00:23:15.735 [2024-11-19 10:50:22.882175] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 4 milliseconds 00:23:15.735 00:23:15.735 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:15.735 [2024-11-19 10:50:22.919319] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:23:15.735 [2024-11-19 10:50:22.919357] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1772642 ] 00:23:15.735 [2024-11-19 10:50:22.960587] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:23:15.735 [2024-11-19 10:50:22.960629] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:15.735 [2024-11-19 10:50:22.960634] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:15.735 [2024-11-19 10:50:22.960645] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:15.735 [2024-11-19 10:50:22.960654] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:15.735 [2024-11-19 10:50:22.964129] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:23:15.735 [2024-11-19 10:50:22.964153] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1ead690 0 00:23:15.735 [2024-11-19 10:50:22.971961] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:15.735 [2024-11-19 10:50:22.971974] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:15.735 [2024-11-19 10:50:22.971978] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:15.735 [2024-11-19 10:50:22.971981] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:15.735 [2024-11-19 10:50:22.972008] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.735 [2024-11-19 10:50:22.972013] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.735 [2024-11-19 10:50:22.972016] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ead690) 00:23:15.735 [2024-11-19 10:50:22.972026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:15.735 [2024-11-19 10:50:22.972043] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f100, cid 0, qid 0 00:23:15.735 [2024-11-19 10:50:22.979956] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.735 [2024-11-19 10:50:22.979965] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.735 [2024-11-19 10:50:22.979968] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.735 [2024-11-19 10:50:22.979971] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f100) on tqpair=0x1ead690 00:23:15.735 [2024-11-19 10:50:22.979979] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:15.735 [2024-11-19 10:50:22.979985] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:23:15.735 [2024-11-19 10:50:22.979990] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:23:15.735 [2024-11-19 10:50:22.980001] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.735 [2024-11-19 10:50:22.980004] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.735 [2024-11-19 10:50:22.980008] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ead690) 00:23:15.735 [2024-11-19 10:50:22.980015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.735 [2024-11-19 10:50:22.980028] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f100, cid 0, qid 0 00:23:15.735 [2024-11-19 10:50:22.980201] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.735 [2024-11-19 10:50:22.980207] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.735 [2024-11-19 10:50:22.980210] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.735 [2024-11-19 10:50:22.980214] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f100) on tqpair=0x1ead690 00:23:15.735 [2024-11-19 10:50:22.980218] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:23:15.735 [2024-11-19 10:50:22.980225] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:23:15.735 [2024-11-19 10:50:22.980232] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.735 [2024-11-19 10:50:22.980235] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.735 [2024-11-19 10:50:22.980238] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ead690) 00:23:15.735 [2024-11-19 10:50:22.980244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.735 [2024-11-19 10:50:22.980254] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f100, cid 0, qid 0 00:23:15.735 [2024-11-19 10:50:22.980325] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.735 [2024-11-19 10:50:22.980331] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.735 [2024-11-19 10:50:22.980334] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.735 [2024-11-19 10:50:22.980338] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f100) on 
tqpair=0x1ead690 00:23:15.735 [2024-11-19 10:50:22.980344] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:23:15.735 [2024-11-19 10:50:22.980351] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:15.735 [2024-11-19 10:50:22.980357] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.735 [2024-11-19 10:50:22.980361] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.735 [2024-11-19 10:50:22.980364] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ead690) 00:23:15.735 [2024-11-19 10:50:22.980369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.735 [2024-11-19 10:50:22.980379] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f100, cid 0, qid 0 00:23:15.735 [2024-11-19 10:50:22.980443] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.735 [2024-11-19 10:50:22.980448] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.735 [2024-11-19 10:50:22.980452] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.735 [2024-11-19 10:50:22.980455] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f100) on tqpair=0x1ead690 00:23:15.735 [2024-11-19 10:50:22.980459] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:15.735 [2024-11-19 10:50:22.980467] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.735 [2024-11-19 10:50:22.980471] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.735 [2024-11-19 10:50:22.980474] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ead690) 00:23:15.735 [2024-11-19 10:50:22.980480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.735 [2024-11-19 10:50:22.980489] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f100, cid 0, qid 0 00:23:15.735 [2024-11-19 10:50:22.980550] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.735 [2024-11-19 10:50:22.980556] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.735 [2024-11-19 10:50:22.980559] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.735 [2024-11-19 10:50:22.980562] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f100) on tqpair=0x1ead690 00:23:15.735 [2024-11-19 10:50:22.980566] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:15.735 [2024-11-19 10:50:22.980570] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:15.735 [2024-11-19 10:50:22.980577] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:15.735 [2024-11-19 10:50:22.980685] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:23:15.735 [2024-11-19 10:50:22.980689] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:15.735 [2024-11-19 10:50:22.980696] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.735 [2024-11-19 10:50:22.980699] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.735 [2024-11-19 10:50:22.980702] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ead690) 00:23:15.735 [2024-11-19 10:50:22.980708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.735 [2024-11-19 10:50:22.980718] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f100, cid 0, qid 0 00:23:15.735 [2024-11-19 10:50:22.980792] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.735 [2024-11-19 10:50:22.980798] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.735 [2024-11-19 10:50:22.980802] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.735 [2024-11-19 10:50:22.980805] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f100) on tqpair=0x1ead690 00:23:15.735 [2024-11-19 10:50:22.980809] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:15.735 [2024-11-19 10:50:22.980818] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.736 [2024-11-19 10:50:22.980822] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.736 [2024-11-19 10:50:22.980825] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ead690) 00:23:15.736 [2024-11-19 10:50:22.980831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.736 [2024-11-19 10:50:22.980841] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f100, cid 0, qid 0 00:23:15.736 [2024-11-19 10:50:22.980902] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.736 [2024-11-19 10:50:22.980908] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.736 [2024-11-19 10:50:22.980911] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.736 [2024-11-19 10:50:22.980914] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f100) on tqpair=0x1ead690 00:23:15.736 [2024-11-19 10:50:22.980918] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:15.736 [2024-11-19 10:50:22.980922] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:15.736 [2024-11-19 10:50:22.980929] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:23:15.736 [2024-11-19 10:50:22.980937] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:15.736 [2024-11-19 10:50:22.980945] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.736 [2024-11-19 10:50:22.980954] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x1ead690) 00:23:15.736 [2024-11-19 10:50:22.980960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.736 [2024-11-19 10:50:22.980970] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f100, cid 0, qid 0 00:23:15.736 [2024-11-19 10:50:22.981069] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:15.736 [2024-11-19 10:50:22.981076] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:15.736 [2024-11-19 10:50:22.981079] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:15.736 [2024-11-19 10:50:22.981082] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ead690): datao=0, datal=4096, cccid=0 00:23:15.736 [2024-11-19 10:50:22.981086] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f0f100) on tqpair(0x1ead690): expected_datao=0, payload_size=4096 00:23:15.736 [2024-11-19 10:50:22.981090] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.736 [2024-11-19 10:50:22.981096] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:15.736 [2024-11-19 10:50:22.981099] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:15.736 [2024-11-19 10:50:22.981109] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.736 [2024-11-19 10:50:22.981114] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.736 [2024-11-19 10:50:22.981117] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.736 [2024-11-19 10:50:22.981121] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f100) on tqpair=0x1ead690 00:23:15.736 [2024-11-19 10:50:22.981127] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:23:15.736 [2024-11-19 10:50:22.981132] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:23:15.736 [2024-11-19 10:50:22.981137] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:23:15.736 [2024-11-19 10:50:22.981143] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:23:15.736 [2024-11-19 10:50:22.981147] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:23:15.736 [2024-11-19 10:50:22.981151] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:23:15.736 [2024-11-19 10:50:22.981160] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:15.736 [2024-11-19 10:50:22.981166] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.736 [2024-11-19 10:50:22.981170] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.736 [2024-11-19 10:50:22.981173] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ead690) 00:23:15.736 [2024-11-19 10:50:22.981179] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:15.736 [2024-11-19 10:50:22.981191] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1f0f100, cid 0, qid 0 00:23:15.736 [2024-11-19 10:50:22.981252] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.736 [2024-11-19 10:50:22.981257] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.736 [2024-11-19 10:50:22.981261] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.736 [2024-11-19 10:50:22.981264] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f100) on tqpair=0x1ead690 00:23:15.736 [2024-11-19 10:50:22.981269] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.736 [2024-11-19 10:50:22.981273] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.736 [2024-11-19 10:50:22.981276] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ead690) 00:23:15.736 [2024-11-19 10:50:22.981281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:15.736 [2024-11-19 10:50:22.981286] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.736 [2024-11-19 10:50:22.981290] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.736 [2024-11-19 10:50:22.981293] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1ead690) 00:23:15.736 [2024-11-19 10:50:22.981297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:15.736 [2024-11-19 10:50:22.981302] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.736 [2024-11-19 10:50:22.981306] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.736 [2024-11-19 10:50:22.981309] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1ead690) 00:23:15.736 [2024-11-19 10:50:22.981314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:15.736 [2024-11-19 10:50:22.981319] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.736 [2024-11-19 10:50:22.981322] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.736 [2024-11-19 10:50:22.981325] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ead690) 00:23:15.736 [2024-11-19 10:50:22.981330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:15.736 [2024-11-19 10:50:22.981334] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:15.736 [2024-11-19 10:50:22.981342] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:15.736 [2024-11-19 10:50:22.981347] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.736 [2024-11-19 10:50:22.981352] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ead690) 00:23:15.736 [2024-11-19 10:50:22.981358] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.736 [2024-11-19 10:50:22.981369] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f100, cid 0, qid 0 00:23:15.736 [2024-11-19 
10:50:22.981374] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f280, cid 1, qid 0 00:23:15.736 [2024-11-19 10:50:22.981378] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f400, cid 2, qid 0 00:23:15.736 [2024-11-19 10:50:22.981382] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f580, cid 3, qid 0 00:23:15.736 [2024-11-19 10:50:22.981386] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f700, cid 4, qid 0 00:23:15.736 [2024-11-19 10:50:22.981486] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.736 [2024-11-19 10:50:22.981492] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.736 [2024-11-19 10:50:22.981495] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.736 [2024-11-19 10:50:22.981498] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f700) on tqpair=0x1ead690 00:23:15.736 [2024-11-19 10:50:22.981504] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:23:15.736 [2024-11-19 10:50:22.981509] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:15.736 [2024-11-19 10:50:22.981516] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:23:15.736 [2024-11-19 10:50:22.981522] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:15.736 [2024-11-19 10:50:22.981527] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.736 [2024-11-19 10:50:22.981531] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.736 [2024-11-19 10:50:22.981534] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ead690) 00:23:15.736 [2024-11-19 10:50:22.981539] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:15.736 [2024-11-19 10:50:22.981549] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f700, cid 4, qid 0 00:23:15.736 [2024-11-19 10:50:22.981616] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.736 [2024-11-19 10:50:22.981622] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.736 [2024-11-19 10:50:22.981625] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.736 [2024-11-19 10:50:22.981628] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f700) on tqpair=0x1ead690 00:23:15.736 [2024-11-19 10:50:22.981681] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:23:15.736 [2024-11-19 10:50:22.981691] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:15.736 [2024-11-19 10:50:22.981697] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.736 [2024-11-19 10:50:22.981701] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ead690) 00:23:15.736 [2024-11-19 10:50:22.981706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.736 [2024-11-19 10:50:22.981716] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f700, cid 4, qid 0 00:23:15.736 [2024-11-19 10:50:22.981791] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:15.736 [2024-11-19 10:50:22.981798] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:15.736 [2024-11-19 10:50:22.981802] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:15.737 [2024-11-19 10:50:22.981805] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ead690): datao=0, datal=4096, cccid=4 00:23:15.737 [2024-11-19 10:50:22.981809] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f0f700) on tqpair(0x1ead690): expected_datao=0, payload_size=4096 00:23:15.737 [2024-11-19 10:50:22.981813] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.737 [2024-11-19 10:50:22.981819] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:15.737 [2024-11-19 10:50:22.981822] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:15.737 [2024-11-19 10:50:23.023953] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.737 [2024-11-19 10:50:23.023964] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.737 [2024-11-19 10:50:23.023967] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.737 [2024-11-19 10:50:23.023971] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f700) on tqpair=0x1ead690 00:23:15.737 [2024-11-19 10:50:23.023981] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:23:15.737 [2024-11-19 10:50:23.023994] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:23:15.737 [2024-11-19 10:50:23.024004] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:23:15.737 [2024-11-19 10:50:23.024010] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.737 [2024-11-19 10:50:23.024014] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ead690) 00:23:15.737 [2024-11-19 10:50:23.024020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.737 [2024-11-19 10:50:23.024033] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f700, cid 4, qid 0 00:23:15.737 [2024-11-19 10:50:23.024131] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:15.737 [2024-11-19 10:50:23.024137] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:15.737 [2024-11-19 10:50:23.024140] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:15.737 [2024-11-19 10:50:23.024143] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ead690): datao=0, datal=4096, cccid=4 00:23:15.737 [2024-11-19 10:50:23.024147] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f0f700) on tqpair(0x1ead690): expected_datao=0, payload_size=4096 00:23:15.737 [2024-11-19 10:50:23.024151] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.737 [2024-11-19 10:50:23.024157] 
nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:15.737 [2024-11-19 10:50:23.024160] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:15.737 [2024-11-19 10:50:23.066082] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.737 [2024-11-19 10:50:23.066094] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.737 [2024-11-19 10:50:23.066097] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.737 [2024-11-19 10:50:23.066101] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f700) on tqpair=0x1ead690 00:23:15.737 [2024-11-19 10:50:23.066114] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:15.737 [2024-11-19 10:50:23.066124] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:15.737 [2024-11-19 10:50:23.066132] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.737 [2024-11-19 10:50:23.066135] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ead690) 00:23:15.737 [2024-11-19 10:50:23.066142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.737 [2024-11-19 10:50:23.066156] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f700, cid 4, qid 0 00:23:15.737 [2024-11-19 10:50:23.066231] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:15.737 [2024-11-19 10:50:23.066237] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:15.737 [2024-11-19 10:50:23.066240] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:15.737 [2024-11-19 10:50:23.066244] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ead690): datao=0, datal=4096, cccid=4 00:23:15.737 [2024-11-19 10:50:23.066247] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f0f700) on tqpair(0x1ead690): expected_datao=0, payload_size=4096 00:23:15.737 [2024-11-19 10:50:23.066251] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.737 [2024-11-19 10:50:23.066262] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:15.737 [2024-11-19 10:50:23.066265] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:15.737 [2024-11-19 10:50:23.110957] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.737 [2024-11-19 10:50:23.110966] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.737 [2024-11-19 10:50:23.110969] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.737 [2024-11-19 10:50:23.110972] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f700) on tqpair=0x1ead690 00:23:15.737 [2024-11-19 10:50:23.110980] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:15.737 [2024-11-19 10:50:23.110987] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:23:15.737 [2024-11-19 10:50:23.110995] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set 
supported features (timeout 30000 ms) 00:23:15.737 [2024-11-19 10:50:23.111000] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:15.737 [2024-11-19 10:50:23.111005] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:15.737 [2024-11-19 10:50:23.111009] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:23:15.737 [2024-11-19 10:50:23.111014] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:23:15.737 [2024-11-19 10:50:23.111018] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:23:15.737 [2024-11-19 10:50:23.111023] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:23:15.737 [2024-11-19 10:50:23.111035] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.737 [2024-11-19 10:50:23.111038] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ead690) 00:23:15.737 [2024-11-19 10:50:23.111045] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.737 [2024-11-19 10:50:23.111051] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.737 [2024-11-19 10:50:23.111055] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.737 [2024-11-19 10:50:23.111058] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ead690) 00:23:15.737 [2024-11-19 10:50:23.111063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:15.737 [2024-11-19 10:50:23.111076] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f700, cid 4, qid 0 00:23:15.737 [2024-11-19 10:50:23.111081] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f880, cid 5, qid 0 00:23:15.737 [2024-11-19 10:50:23.111157] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.737 [2024-11-19 10:50:23.111165] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.737 [2024-11-19 10:50:23.111168] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.737 [2024-11-19 10:50:23.111172] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f700) on tqpair=0x1ead690 00:23:15.737 [2024-11-19 10:50:23.111177] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.737 [2024-11-19 10:50:23.111182] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.737 [2024-11-19 10:50:23.111185] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.737 [2024-11-19 10:50:23.111188] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f880) on tqpair=0x1ead690 00:23:15.737 [2024-11-19 10:50:23.111197] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.737 [2024-11-19 10:50:23.111200] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ead690) 00:23:15.737 [2024-11-19 10:50:23.111206] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.737 [2024-11-19 10:50:23.111216] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f880, cid 5, qid 0 00:23:15.737 [2024-11-19 10:50:23.111283] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.737 [2024-11-19 10:50:23.111289] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.737 [2024-11-19 10:50:23.111292] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.737 [2024-11-19 10:50:23.111295] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f880) on tqpair=0x1ead690 00:23:15.737 [2024-11-19 10:50:23.111303] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.737 [2024-11-19 10:50:23.111306] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ead690) 00:23:15.737 [2024-11-19 10:50:23.111312] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.737 [2024-11-19 10:50:23.111321] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f880, cid 5, qid 0 00:23:15.737 [2024-11-19 10:50:23.111402] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.737 [2024-11-19 10:50:23.111408] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.737 [2024-11-19 10:50:23.111411] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.737 [2024-11-19 10:50:23.111414] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f880) on tqpair=0x1ead690 00:23:15.737 [2024-11-19 10:50:23.111422] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.737 [2024-11-19 10:50:23.111425] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ead690) 00:23:15.737 [2024-11-19 10:50:23.111431] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.737 [2024-11-19 10:50:23.111443] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f880, cid 5, qid 0 00:23:15.737 [2024-11-19 10:50:23.111520] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.737 [2024-11-19 10:50:23.111526] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.737 [2024-11-19 10:50:23.111528] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.737 [2024-11-19 10:50:23.111531] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f880) on tqpair=0x1ead690 00:23:15.737 [2024-11-19 10:50:23.111544] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.737 [2024-11-19 10:50:23.111549] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ead690) 00:23:15.737 [2024-11-19 10:50:23.111557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.737 [2024-11-19 10:50:23.111563] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.738 [2024-11-19 10:50:23.111569] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ead690) 00:23:15.738 [2024-11-19 10:50:23.111575] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.738 [2024-11-19 10:50:23.111581] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.738 [2024-11-19 10:50:23.111584] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1ead690) 00:23:15.738 [2024-11-19 10:50:23.111590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.738 [2024-11-19 10:50:23.111596] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.738 [2024-11-19 10:50:23.111599] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1ead690) 00:23:15.738 [2024-11-19 10:50:23.111604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.738 [2024-11-19 10:50:23.111615] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f880, cid 5, qid 0 00:23:15.738 [2024-11-19 10:50:23.111619] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f700, cid 4, qid 0 00:23:15.738 [2024-11-19 10:50:23.111623] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0fa00, cid 6, qid 0 00:23:15.738 [2024-11-19 10:50:23.111627] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0fb80, cid 7, qid 0 00:23:15.738 [2024-11-19 10:50:23.111769] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:15.738 [2024-11-19 10:50:23.111776] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:15.738 [2024-11-19 10:50:23.111779] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:15.738 [2024-11-19 10:50:23.111782] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ead690): datao=0, datal=8192, cccid=5 00:23:15.738 [2024-11-19 10:50:23.111786] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f0f880) on tqpair(0x1ead690): expected_datao=0, payload_size=8192 00:23:15.738 [2024-11-19 10:50:23.111789] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.738 [2024-11-19 10:50:23.111809] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:15.738 [2024-11-19 10:50:23.111812] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:15.738 [2024-11-19 10:50:23.111817] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:15.738 [2024-11-19 10:50:23.111822] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:15.738 [2024-11-19 10:50:23.111825] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:15.738 [2024-11-19 10:50:23.111828] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ead690): datao=0, datal=512, cccid=4 00:23:15.738 [2024-11-19 10:50:23.111832] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f0f700) on tqpair(0x1ead690): expected_datao=0, payload_size=512 00:23:15.738 [2024-11-19 10:50:23.111836] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.738 [2024-11-19 10:50:23.111841] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:15.738 [2024-11-19 10:50:23.111845] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
00:23:15.738 [2024-11-19 10:50:23.111849] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:15.738 [2024-11-19 10:50:23.111854] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:15.738 [2024-11-19 10:50:23.111857] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:15.738 [2024-11-19 10:50:23.111860] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ead690): datao=0, datal=512, cccid=6 00:23:15.738 [2024-11-19 10:50:23.111864] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f0fa00) on tqpair(0x1ead690): expected_datao=0, payload_size=512 00:23:15.738 [2024-11-19 10:50:23.111867] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.738 [2024-11-19 10:50:23.111873] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:15.738 [2024-11-19 10:50:23.111877] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:15.738 [2024-11-19 10:50:23.111882] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:15.738 [2024-11-19 10:50:23.111887] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:15.738 [2024-11-19 10:50:23.111890] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:15.738 [2024-11-19 10:50:23.111893] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ead690): datao=0, datal=4096, cccid=7 00:23:15.738 [2024-11-19 10:50:23.111897] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f0fb80) on tqpair(0x1ead690): expected_datao=0, payload_size=4096 00:23:15.738 [2024-11-19 10:50:23.111901] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.738 [2024-11-19 10:50:23.111906] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:15.738 [2024-11-19 10:50:23.111909] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:15.738 [2024-11-19 10:50:23.111916] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.738 [2024-11-19 10:50:23.111921] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.738 [2024-11-19 10:50:23.111924] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.738 [2024-11-19 10:50:23.111927] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f880) on tqpair=0x1ead690 00:23:15.738 [2024-11-19 10:50:23.111937] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.738 [2024-11-19 10:50:23.111942] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.738 [2024-11-19 10:50:23.111945] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.738 [2024-11-19 10:50:23.111954] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f700) on tqpair=0x1ead690 00:23:15.738 [2024-11-19 10:50:23.111963] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.738 [2024-11-19 10:50:23.111968] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.738 [2024-11-19 10:50:23.111971] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.738 [2024-11-19 10:50:23.111974] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0fa00) on tqpair=0x1ead690 00:23:15.738 [2024-11-19 10:50:23.111979] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.738 [2024-11-19 10:50:23.111985] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.738 [2024-11-19 
10:50:23.111988] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.738 [2024-11-19 10:50:23.111991] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0fb80) on tqpair=0x1ead690 00:23:15.738 ===================================================== 00:23:15.738 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:15.738 ===================================================== 00:23:15.738 Controller Capabilities/Features 00:23:15.738 ================================ 00:23:15.738 Vendor ID: 8086 00:23:15.738 Subsystem Vendor ID: 8086 00:23:15.738 Serial Number: SPDK00000000000001 00:23:15.738 Model Number: SPDK bdev Controller 00:23:15.738 Firmware Version: 25.01 00:23:15.738 Recommended Arb Burst: 6 00:23:15.738 IEEE OUI Identifier: e4 d2 5c 00:23:15.738 Multi-path I/O 00:23:15.738 May have multiple subsystem ports: Yes 00:23:15.738 May have multiple controllers: Yes 00:23:15.738 Associated with SR-IOV VF: No 00:23:15.738 Max Data Transfer Size: 131072 00:23:15.738 Max Number of Namespaces: 32 00:23:15.738 Max Number of I/O Queues: 127 00:23:15.738 NVMe Specification Version (VS): 1.3 00:23:15.738 NVMe Specification Version (Identify): 1.3 00:23:15.738 Maximum Queue Entries: 128 00:23:15.738 Contiguous Queues Required: Yes 00:23:15.738 Arbitration Mechanisms Supported 00:23:15.738 Weighted Round Robin: Not Supported 00:23:15.738 Vendor Specific: Not Supported 00:23:15.738 Reset Timeout: 15000 ms 00:23:15.738 Doorbell Stride: 4 bytes 00:23:15.738 NVM Subsystem Reset: Not Supported 00:23:15.738 Command Sets Supported 00:23:15.738 NVM Command Set: Supported 00:23:15.738 Boot Partition: Not Supported 00:23:15.738 Memory Page Size Minimum: 4096 bytes 00:23:15.738 Memory Page Size Maximum: 4096 bytes 00:23:15.738 Persistent Memory Region: Not Supported 00:23:15.738 Optional Asynchronous Events Supported 00:23:15.738 Namespace Attribute Notices: Supported 00:23:15.738 Firmware Activation Notices: Not Supported 00:23:15.738 ANA Change Notices: Not Supported 00:23:15.738 PLE Aggregate Log Change Notices: Not Supported 00:23:15.738 LBA Status Info Alert Notices: Not Supported 00:23:15.738 EGE Aggregate Log Change Notices: Not Supported 00:23:15.738 Normal NVM Subsystem Shutdown event: Not Supported 00:23:15.738 Zone Descriptor Change Notices: Not Supported 00:23:15.738 Discovery Log Change Notices: Not Supported 00:23:15.738 Controller Attributes 00:23:15.738 128-bit Host Identifier: Supported 00:23:15.738 Non-Operational Permissive Mode: Not Supported 00:23:15.738 NVM Sets: Not Supported 00:23:15.738 Read Recovery Levels: Not Supported 00:23:15.738 Endurance Groups: Not Supported 00:23:15.738 Predictable Latency Mode: Not Supported 00:23:15.738 Traffic Based Keep ALive: Not Supported 00:23:15.738 Namespace Granularity: Not Supported 00:23:15.738 SQ Associations: Not Supported 00:23:15.738 UUID List: Not Supported 00:23:15.738 Multi-Domain Subsystem: Not Supported 00:23:15.738 Fixed Capacity Management: Not Supported 00:23:15.739 Variable Capacity Management: Not Supported 00:23:15.739 Delete Endurance Group: Not Supported 00:23:15.739 Delete NVM Set: Not Supported 00:23:15.739 Extended LBA Formats Supported: Not Supported 00:23:15.739 Flexible Data Placement Supported: Not Supported 00:23:15.739 00:23:15.739 Controller Memory Buffer Support 00:23:15.739 ================================ 00:23:15.739 Supported: No 00:23:15.739 00:23:15.739 Persistent Memory Region Support 00:23:15.739 ================================ 00:23:15.739 
Supported: No 00:23:15.739 00:23:15.739 Admin Command Set Attributes 00:23:15.739 ============================ 00:23:15.739 Security Send/Receive: Not Supported 00:23:15.739 Format NVM: Not Supported 00:23:15.739 Firmware Activate/Download: Not Supported 00:23:15.739 Namespace Management: Not Supported 00:23:15.739 Device Self-Test: Not Supported 00:23:15.739 Directives: Not Supported 00:23:15.739 NVMe-MI: Not Supported 00:23:15.739 Virtualization Management: Not Supported 00:23:15.739 Doorbell Buffer Config: Not Supported 00:23:15.739 Get LBA Status Capability: Not Supported 00:23:15.739 Command & Feature Lockdown Capability: Not Supported 00:23:15.739 Abort Command Limit: 4 00:23:15.739 Async Event Request Limit: 4 00:23:15.739 Number of Firmware Slots: N/A 00:23:15.739 Firmware Slot 1 Read-Only: N/A 00:23:15.739 Firmware Activation Without Reset: N/A 00:23:15.739 Multiple Update Detection Support: N/A 00:23:15.739 Firmware Update Granularity: No Information Provided 00:23:15.739 Per-Namespace SMART Log: No 00:23:15.739 Asymmetric Namespace Access Log Page: Not Supported 00:23:15.739 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:15.739 Command Effects Log Page: Supported 00:23:15.739 Get Log Page Extended Data: Supported 00:23:15.739 Telemetry Log Pages: Not Supported 00:23:15.739 Persistent Event Log Pages: Not Supported 00:23:15.739 Supported Log Pages Log Page: May Support 00:23:15.739 Commands Supported & Effects Log Page: Not Supported 00:23:15.739 Feature Identifiers & Effects Log Page:May Support 00:23:15.739 NVMe-MI Commands & Effects Log Page: May Support 00:23:15.739 Data Area 4 for Telemetry Log: Not Supported 00:23:15.739 Error Log Page Entries Supported: 128 00:23:15.739 Keep Alive: Supported 00:23:15.739 Keep Alive Granularity: 10000 ms 00:23:15.739 00:23:15.739 NVM Command Set Attributes 00:23:15.739 ========================== 00:23:15.739 Submission Queue Entry Size 00:23:15.739 Max: 64 00:23:15.739 Min: 64 00:23:15.739 Completion Queue Entry Size 00:23:15.739 Max: 16 00:23:15.739 Min: 16 00:23:15.739 Number of Namespaces: 32 00:23:15.739 Compare Command: Supported 00:23:15.739 Write Uncorrectable Command: Not Supported 00:23:15.739 Dataset Management Command: Supported 00:23:15.739 Write Zeroes Command: Supported 00:23:15.739 Set Features Save Field: Not Supported 00:23:15.739 Reservations: Supported 00:23:15.739 Timestamp: Not Supported 00:23:15.739 Copy: Supported 00:23:15.739 Volatile Write Cache: Present 00:23:15.739 Atomic Write Unit (Normal): 1 00:23:15.739 Atomic Write Unit (PFail): 1 00:23:15.739 Atomic Compare & Write Unit: 1 00:23:15.739 Fused Compare & Write: Supported 00:23:15.739 Scatter-Gather List 00:23:15.739 SGL Command Set: Supported 00:23:15.739 SGL Keyed: Supported 00:23:15.739 SGL Bit Bucket Descriptor: Not Supported 00:23:15.739 SGL Metadata Pointer: Not Supported 00:23:15.739 Oversized SGL: Not Supported 00:23:15.739 SGL Metadata Address: Not Supported 00:23:15.739 SGL Offset: Supported 00:23:15.739 Transport SGL Data Block: Not Supported 00:23:15.739 Replay Protected Memory Block: Not Supported 00:23:15.739 00:23:15.739 Firmware Slot Information 00:23:15.739 ========================= 00:23:15.739 Active slot: 1 00:23:15.739 Slot 1 Firmware Revision: 25.01 00:23:15.739 00:23:15.739 00:23:15.739 Commands Supported and Effects 00:23:15.739 ============================== 00:23:15.739 Admin Commands 00:23:15.739 -------------- 00:23:15.739 Get Log Page (02h): Supported 00:23:15.739 Identify (06h): Supported 00:23:15.739 Abort (08h): Supported 
00:23:15.739 Set Features (09h): Supported 00:23:15.739 Get Features (0Ah): Supported 00:23:15.739 Asynchronous Event Request (0Ch): Supported 00:23:15.739 Keep Alive (18h): Supported 00:23:15.739 I/O Commands 00:23:15.739 ------------ 00:23:15.739 Flush (00h): Supported LBA-Change 00:23:15.739 Write (01h): Supported LBA-Change 00:23:15.739 Read (02h): Supported 00:23:15.739 Compare (05h): Supported 00:23:15.739 Write Zeroes (08h): Supported LBA-Change 00:23:15.739 Dataset Management (09h): Supported LBA-Change 00:23:15.739 Copy (19h): Supported LBA-Change 00:23:15.739 00:23:15.739 Error Log 00:23:15.739 ========= 00:23:15.739 00:23:15.739 Arbitration 00:23:15.739 =========== 00:23:15.739 Arbitration Burst: 1 00:23:15.739 00:23:15.739 Power Management 00:23:15.739 ================ 00:23:15.739 Number of Power States: 1 00:23:15.739 Current Power State: Power State #0 00:23:15.739 Power State #0: 00:23:15.739 Max Power: 0.00 W 00:23:15.739 Non-Operational State: Operational 00:23:15.739 Entry Latency: Not Reported 00:23:15.739 Exit Latency: Not Reported 00:23:15.739 Relative Read Throughput: 0 00:23:15.739 Relative Read Latency: 0 00:23:15.739 Relative Write Throughput: 0 00:23:15.739 Relative Write Latency: 0 00:23:15.739 Idle Power: Not Reported 00:23:15.739 Active Power: Not Reported 00:23:15.739 Non-Operational Permissive Mode: Not Supported 00:23:15.739 00:23:15.739 Health Information 00:23:15.739 ================== 00:23:15.739 Critical Warnings: 00:23:15.739 Available Spare Space: OK 00:23:15.739 Temperature: OK 00:23:15.739 Device Reliability: OK 00:23:15.739 Read Only: No 00:23:15.739 Volatile Memory Backup: OK 00:23:15.739 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:15.739 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:15.739 Available Spare: 0% 00:23:15.739 Available Spare Threshold: 0% 00:23:15.739 Life Percentage Used:[2024-11-19 10:50:23.112072] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.739 [2024-11-19 10:50:23.112077] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1ead690) 00:23:15.739 [2024-11-19 10:50:23.112083] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.739 [2024-11-19 10:50:23.112094] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0fb80, cid 7, qid 0 00:23:15.739 [2024-11-19 10:50:23.112178] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.739 [2024-11-19 10:50:23.112184] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.739 [2024-11-19 10:50:23.112187] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.739 [2024-11-19 10:50:23.112191] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0fb80) on tqpair=0x1ead690 00:23:15.739 [2024-11-19 10:50:23.112216] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:23:15.739 [2024-11-19 10:50:23.112224] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f100) on tqpair=0x1ead690 00:23:15.739 [2024-11-19 10:50:23.112230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.739 [2024-11-19 10:50:23.112234] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f280) on tqpair=0x1ead690 00:23:15.739 [2024-11-19 10:50:23.112242] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.739 [2024-11-19 10:50:23.112246] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f400) on tqpair=0x1ead690 00:23:15.739 [2024-11-19 10:50:23.112250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.739 [2024-11-19 10:50:23.112254] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f580) on tqpair=0x1ead690 00:23:15.739 [2024-11-19 10:50:23.112258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.739 [2024-11-19 10:50:23.112265] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.739 [2024-11-19 10:50:23.112268] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.739 [2024-11-19 10:50:23.112271] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ead690) 00:23:15.739 [2024-11-19 10:50:23.112277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.739 [2024-11-19 10:50:23.112289] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f580, cid 3, qid 0 00:23:15.739 [2024-11-19 10:50:23.112348] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.739 [2024-11-19 10:50:23.112354] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.739 [2024-11-19 10:50:23.112357] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.739 [2024-11-19 10:50:23.112361] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f580) on tqpair=0x1ead690 00:23:15.739 [2024-11-19 10:50:23.112366] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.739 [2024-11-19 10:50:23.112369] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.739 [2024-11-19 10:50:23.112372] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ead690) 00:23:15.739 [2024-11-19 10:50:23.112378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.740 [2024-11-19 10:50:23.112390] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f580, cid 3, qid 0 00:23:15.740 [2024-11-19 10:50:23.112466] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.740 [2024-11-19 10:50:23.112472] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.740 [2024-11-19 10:50:23.112474] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.740 [2024-11-19 10:50:23.112478] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f580) on tqpair=0x1ead690 00:23:15.740 [2024-11-19 10:50:23.112482] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:23:15.740 [2024-11-19 10:50:23.112486] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:23:15.740 [2024-11-19 10:50:23.112494] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.740 [2024-11-19 10:50:23.112497] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.740 [2024-11-19 10:50:23.112501] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ead690) 00:23:15.740 [2024-11-19 10:50:23.112506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.740 [2024-11-19 10:50:23.112517] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f580, cid 3, qid 0 00:23:15.740 [2024-11-19 10:50:23.112582] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.740 [2024-11-19 10:50:23.112588] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.740 [2024-11-19 10:50:23.112591] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.740 [2024-11-19 10:50:23.112594] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f580) on tqpair=0x1ead690 00:23:15.740 [2024-11-19 10:50:23.112602] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.740 [2024-11-19 10:50:23.112607] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.740 [2024-11-19 10:50:23.112611] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ead690) 00:23:15.740 [2024-11-19 10:50:23.112616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.740 [2024-11-19 10:50:23.112626] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f580, cid 3, qid 0 00:23:15.740 [2024-11-19 10:50:23.112700] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.740 [2024-11-19 10:50:23.112705] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.740 [2024-11-19 10:50:23.112708] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.740 [2024-11-19 10:50:23.112713] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f580) on tqpair=0x1ead690 00:23:15.740 [2024-11-19 10:50:23.112720] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.740 [2024-11-19 10:50:23.112724] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.740 [2024-11-19 10:50:23.112727] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ead690) 00:23:15.740 [2024-11-19 10:50:23.112733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.740 [2024-11-19 10:50:23.112742] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f580, cid 3, qid 0 00:23:15.740 [2024-11-19 10:50:23.112806] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.740 [2024-11-19 10:50:23.112811] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.740 [2024-11-19 10:50:23.112814] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.740 [2024-11-19 10:50:23.112818] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f580) on tqpair=0x1ead690 00:23:15.740 [2024-11-19 10:50:23.112826] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.740 [2024-11-19 10:50:23.112830] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.740 [2024-11-19 10:50:23.112833] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ead690) 00:23:15.740 [2024-11-19 10:50:23.112839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.740 [2024-11-19 10:50:23.112848] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f580, cid 3, qid 0 00:23:15.740 [2024-11-19 10:50:23.112909] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.740 [2024-11-19 10:50:23.112914] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.740 [2024-11-19 10:50:23.112918] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.740 [2024-11-19 10:50:23.112921] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f580) on tqpair=0x1ead690 00:23:15.740 [2024-11-19 10:50:23.112928] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.740 [2024-11-19 10:50:23.112932] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.740 [2024-11-19 10:50:23.112935] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ead690) 00:23:15.740 [2024-11-19 10:50:23.112941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.740 [2024-11-19 10:50:23.112955] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f580, cid 3, qid 0 00:23:15.740 [2024-11-19 10:50:23.113027] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.740 [2024-11-19 10:50:23.113033] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.740 [2024-11-19 10:50:23.113036] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.740 [2024-11-19 10:50:23.113039] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f580) on tqpair=0x1ead690 00:23:15.740 [2024-11-19 10:50:23.113047] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.740 [2024-11-19 10:50:23.113051] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.740 [2024-11-19 10:50:23.113056] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ead690) 00:23:15.740 [2024-11-19 10:50:23.113062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.740 [2024-11-19 10:50:23.113072] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f580, cid 3, qid 0 00:23:15.740 [2024-11-19 10:50:23.113144] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.740 [2024-11-19 10:50:23.113150] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.740 [2024-11-19 10:50:23.113153] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.740 [2024-11-19 10:50:23.113156] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f580) on tqpair=0x1ead690 00:23:15.740 [2024-11-19 10:50:23.113164] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.740 [2024-11-19 10:50:23.113168] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.740 [2024-11-19 10:50:23.113171] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ead690) 00:23:15.740 [2024-11-19 10:50:23.113177] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.740 [2024-11-19 10:50:23.113186] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f580, cid 3, qid 0 00:23:15.740 [2024-11-19 
10:50:23.113250] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.740 [2024-11-19 10:50:23.113256] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.740 [2024-11-19 10:50:23.113259] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.740 [2024-11-19 10:50:23.113262] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f580) on tqpair=0x1ead690 00:23:15.740 [2024-11-19 10:50:23.113271] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.740 [2024-11-19 10:50:23.113274] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.740 [2024-11-19 10:50:23.113277] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ead690) 00:23:15.740 [2024-11-19 10:50:23.113283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.740 [2024-11-19 10:50:23.113293] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f580, cid 3, qid 0 00:23:15.740 [2024-11-19 10:50:23.113356] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.740 [2024-11-19 10:50:23.113361] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.740 [2024-11-19 10:50:23.113364] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.740 [2024-11-19 10:50:23.113368] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f580) on tqpair=0x1ead690 00:23:15.740 [2024-11-19 10:50:23.113376] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.740 [2024-11-19 10:50:23.113379] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.740 [2024-11-19 10:50:23.113383] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ead690) 00:23:15.740 [2024-11-19 10:50:23.113388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.740 [2024-11-19 10:50:23.113397] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f580, cid 3, qid 0 00:23:15.740 [2024-11-19 10:50:23.113455] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.740 [2024-11-19 10:50:23.113461] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.740 [2024-11-19 10:50:23.113464] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.740 [2024-11-19 10:50:23.113467] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f580) on tqpair=0x1ead690 00:23:15.740 [2024-11-19 10:50:23.113475] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.740 [2024-11-19 10:50:23.113479] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.740 [2024-11-19 10:50:23.113482] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ead690) 00:23:15.740 [2024-11-19 10:50:23.113489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.740 [2024-11-19 10:50:23.113498] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f580, cid 3, qid 0 00:23:15.740 [2024-11-19 10:50:23.113573] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.740 [2024-11-19 10:50:23.113579] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.740 
[2024-11-19 10:50:23.113582] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.740 [2024-11-19 10:50:23.113585] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f580) on tqpair=0x1ead690 00:23:15.740 [2024-11-19 10:50:23.113593] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.740 [2024-11-19 10:50:23.113597] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.740 [2024-11-19 10:50:23.113600] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ead690) 00:23:15.740 [2024-11-19 10:50:23.113605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.740 [2024-11-19 10:50:23.113614] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f580, cid 3, qid 0 00:23:15.740 [2024-11-19 10:50:23.113680] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.740 [2024-11-19 10:50:23.113686] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.740 [2024-11-19 10:50:23.113689] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.740 [2024-11-19 10:50:23.113692] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f580) on tqpair=0x1ead690 00:23:15.741 [2024-11-19 10:50:23.113701] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.741 [2024-11-19 10:50:23.113704] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.741 [2024-11-19 10:50:23.113707] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ead690) 00:23:15.741 [2024-11-19 10:50:23.113713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.741 [2024-11-19 10:50:23.113722] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f580, cid 3, qid 0 00:23:15.741 [2024-11-19 10:50:23.113783] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.741 [2024-11-19 10:50:23.113789] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.741 [2024-11-19 10:50:23.113792] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.741 [2024-11-19 10:50:23.113795] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f580) on tqpair=0x1ead690 00:23:15.741 [2024-11-19 10:50:23.113803] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.741 [2024-11-19 10:50:23.113807] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.741 [2024-11-19 10:50:23.113810] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ead690) 00:23:15.741 [2024-11-19 10:50:23.113815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.741 [2024-11-19 10:50:23.113825] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f580, cid 3, qid 0 00:23:15.741 [2024-11-19 10:50:23.113885] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.741 [2024-11-19 10:50:23.113891] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.741 [2024-11-19 10:50:23.113894] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.741 [2024-11-19 10:50:23.113897] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1f0f580) on tqpair=0x1ead690 00:23:15.741 [2024-11-19 10:50:23.113906] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.741 [2024-11-19 10:50:23.113910] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.741 [2024-11-19 10:50:23.113913] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ead690) 00:23:15.741 [2024-11-19 10:50:23.113918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.741 [2024-11-19 10:50:23.113930] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f580, cid 3, qid 0 00:23:15.741 [2024-11-19 10:50:23.114004] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.741 [2024-11-19 10:50:23.114010] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.741 [2024-11-19 10:50:23.114013] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.741 [2024-11-19 10:50:23.114016] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f580) on tqpair=0x1ead690 00:23:15.741 [2024-11-19 10:50:23.114024] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.741 [2024-11-19 10:50:23.114028] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.741 [2024-11-19 10:50:23.114031] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ead690) 00:23:15.741 [2024-11-19 10:50:23.114037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.741 [2024-11-19 10:50:23.114046] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f580, cid 3, qid 0 00:23:15.741 [2024-11-19 10:50:23.114114] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.741 [2024-11-19 10:50:23.114120] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.741 [2024-11-19 10:50:23.114122] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.741 [2024-11-19 10:50:23.114126] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f580) on tqpair=0x1ead690 00:23:15.741 [2024-11-19 10:50:23.114134] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.741 [2024-11-19 10:50:23.114138] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.741 [2024-11-19 10:50:23.114141] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ead690) 00:23:15.741 [2024-11-19 10:50:23.114146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.741 [2024-11-19 10:50:23.114156] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f580, cid 3, qid 0 00:23:15.741 [2024-11-19 10:50:23.114222] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.741 [2024-11-19 10:50:23.114228] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.741 [2024-11-19 10:50:23.114231] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.741 [2024-11-19 10:50:23.114234] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f580) on tqpair=0x1ead690 00:23:15.741 [2024-11-19 10:50:23.114242] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.741 [2024-11-19 10:50:23.114245] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.741 [2024-11-19 10:50:23.114249] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ead690) 00:23:15.741 [2024-11-19 10:50:23.114254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.741 [2024-11-19 10:50:23.114263] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f580, cid 3, qid 0 00:23:15.741 [2024-11-19 10:50:23.114338] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.741 [2024-11-19 10:50:23.114344] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.741 [2024-11-19 10:50:23.114347] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.741 [2024-11-19 10:50:23.114350] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f580) on tqpair=0x1ead690 00:23:15.741 [2024-11-19 10:50:23.114358] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.741 [2024-11-19 10:50:23.114362] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.741 [2024-11-19 10:50:23.114365] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ead690) 00:23:15.741 [2024-11-19 10:50:23.114371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.741 [2024-11-19 10:50:23.114381] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f580, cid 3, qid 0 00:23:15.741 [2024-11-19 10:50:23.114454] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.741 [2024-11-19 10:50:23.114459] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.741 [2024-11-19 10:50:23.114462] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.741 [2024-11-19 10:50:23.114466] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f580) on tqpair=0x1ead690 00:23:15.741 [2024-11-19 10:50:23.114473] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.741 [2024-11-19 10:50:23.114477] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.741 [2024-11-19 10:50:23.114480] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ead690) 00:23:15.741 [2024-11-19 10:50:23.114486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.741 [2024-11-19 10:50:23.114495] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f580, cid 3, qid 0 00:23:15.741 [2024-11-19 10:50:23.114560] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.741 [2024-11-19 10:50:23.114565] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.741 [2024-11-19 10:50:23.114568] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.741 [2024-11-19 10:50:23.114571] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f580) on tqpair=0x1ead690 00:23:15.741 [2024-11-19 10:50:23.114580] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.741 [2024-11-19 10:50:23.114584] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.741 [2024-11-19 10:50:23.114587] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ead690) 00:23:15.741 
[2024-11-19 10:50:23.114593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.741 [2024-11-19 10:50:23.114602] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f580, cid 3, qid 0 00:23:15.741 [2024-11-19 10:50:23.114664] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.741 [2024-11-19 10:50:23.114670] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.741 [2024-11-19 10:50:23.114673] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.741 [2024-11-19 10:50:23.114676] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f580) on tqpair=0x1ead690 00:23:15.741 [2024-11-19 10:50:23.114684] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.741 [2024-11-19 10:50:23.114687] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.741 [2024-11-19 10:50:23.114691] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ead690) 00:23:15.741 [2024-11-19 10:50:23.114696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.741 [2024-11-19 10:50:23.114705] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f580, cid 3, qid 0 00:23:15.741 [2024-11-19 10:50:23.114782] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.741 [2024-11-19 10:50:23.114787] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.741 [2024-11-19 10:50:23.114790] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.741 [2024-11-19 10:50:23.114794] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f580) on tqpair=0x1ead690 00:23:15.741 [2024-11-19 10:50:23.114802] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.741 [2024-11-19 10:50:23.114805] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.741 [2024-11-19 10:50:23.114808] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ead690) 00:23:15.741 [2024-11-19 10:50:23.114814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.741 [2024-11-19 10:50:23.114823] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f580, cid 3, qid 0 00:23:15.741 [2024-11-19 10:50:23.114900] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.741 [2024-11-19 10:50:23.114907] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.741 [2024-11-19 10:50:23.114911] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.741 [2024-11-19 10:50:23.114914] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f580) on tqpair=0x1ead690 00:23:15.741 [2024-11-19 10:50:23.114922] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.741 [2024-11-19 10:50:23.114925] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.741 [2024-11-19 10:50:23.114928] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ead690) 00:23:15.741 [2024-11-19 10:50:23.114934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.741 [2024-11-19 10:50:23.114944] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f580, cid 3, qid 0 00:23:15.741 [2024-11-19 10:50:23.118959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.741 [2024-11-19 10:50:23.118965] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.741 [2024-11-19 10:50:23.118968] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.742 [2024-11-19 10:50:23.118971] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f580) on tqpair=0x1ead690 00:23:15.742 [2024-11-19 10:50:23.118981] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:15.742 [2024-11-19 10:50:23.118984] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:15.742 [2024-11-19 10:50:23.118988] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ead690) 00:23:15.742 [2024-11-19 10:50:23.118993] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.742 [2024-11-19 10:50:23.119004] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f0f580, cid 3, qid 0 00:23:15.742 [2024-11-19 10:50:23.119166] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:15.742 [2024-11-19 10:50:23.119172] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:15.742 [2024-11-19 10:50:23.119175] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:15.742 [2024-11-19 10:50:23.119178] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f0f580) on tqpair=0x1ead690 00:23:15.742 [2024-11-19 10:50:23.119185] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:23:15.742 0% 00:23:15.742 Data Units Read: 0 00:23:15.742 Data Units Written: 0 00:23:15.742 Host Read Commands: 0 00:23:15.742 Host Write Commands: 0 00:23:15.742 Controller Busy Time: 0 minutes 00:23:15.742 Power Cycles: 0 00:23:15.742 Power On Hours: 0 hours 00:23:15.742 Unsafe Shutdowns: 0 00:23:15.742 Unrecoverable Media Errors: 0 00:23:15.742 Lifetime Error Log Entries: 0 00:23:15.742 Warning Temperature Time: 0 minutes 00:23:15.742 Critical Temperature Time: 0 minutes 00:23:15.742 00:23:15.742 Number of Queues 00:23:15.742 ================ 00:23:15.742 Number of I/O Submission Queues: 127 00:23:15.742 Number of I/O Completion Queues: 127 00:23:15.742 00:23:15.742 Active Namespaces 00:23:15.742 ================= 00:23:15.742 Namespace ID:1 00:23:15.742 Error Recovery Timeout: Unlimited 00:23:15.742 Command Set Identifier: NVM (00h) 00:23:15.742 Deallocate: Supported 00:23:15.742 Deallocated/Unwritten Error: Not Supported 00:23:15.742 Deallocated Read Value: Unknown 00:23:15.742 Deallocate in Write Zeroes: Not Supported 00:23:15.742 Deallocated Guard Field: 0xFFFF 00:23:15.742 Flush: Supported 00:23:15.742 Reservation: Supported 00:23:15.742 Namespace Sharing Capabilities: Multiple Controllers 00:23:15.742 Size (in LBAs): 131072 (0GiB) 00:23:15.742 Capacity (in LBAs): 131072 (0GiB) 00:23:15.742 Utilization (in LBAs): 131072 (0GiB) 00:23:15.742 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:15.742 EUI64: ABCDEF0123456789 00:23:15.742 UUID: 8c1d15b0-7e79-480c-a1bb-000d6f7d3325 00:23:15.742 Thin Provisioning: Not Supported 00:23:15.742 Per-NS Atomic Units: Yes 00:23:15.742 Atomic Boundary Size (Normal): 0 00:23:15.742 Atomic Boundary Size (PFail): 0 00:23:15.742 Atomic Boundary Offset: 
0 00:23:15.742 Maximum Single Source Range Length: 65535 00:23:15.742 Maximum Copy Length: 65535 00:23:15.742 Maximum Source Range Count: 1 00:23:15.742 NGUID/EUI64 Never Reused: No 00:23:15.742 Namespace Write Protected: No 00:23:15.742 Number of LBA Formats: 1 00:23:15.742 Current LBA Format: LBA Format #00 00:23:15.742 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:15.742 00:23:15.742 10:50:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:15.742 10:50:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:15.742 10:50:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.742 10:50:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:15.742 10:50:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.742 10:50:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:15.742 10:50:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:15.742 10:50:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:15.742 10:50:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:23:15.742 10:50:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:15.742 10:50:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:23:15.742 10:50:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:15.742 10:50:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:15.742 rmmod nvme_tcp 00:23:16.001 rmmod nvme_fabrics 00:23:16.001 rmmod nvme_keyring 00:23:16.001 10:50:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:16.001 10:50:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:23:16.001 10:50:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:23:16.001 10:50:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 1772607 ']' 00:23:16.001 10:50:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 1772607 00:23:16.001 10:50:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 1772607 ']' 00:23:16.001 10:50:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 1772607 00:23:16.001 10:50:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:23:16.001 10:50:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:16.001 10:50:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1772607 00:23:16.001 10:50:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:16.001 10:50:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:16.001 10:50:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1772607' 00:23:16.001 killing process with pid 1772607 00:23:16.001 10:50:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 1772607 00:23:16.001 10:50:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 1772607 00:23:16.260 10:50:23 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:16.260 10:50:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:16.260 10:50:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:16.261 10:50:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:23:16.261 10:50:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:23:16.261 10:50:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:16.261 10:50:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:23:16.261 10:50:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:16.261 10:50:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:16.261 10:50:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.261 10:50:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:16.261 10:50:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.166 10:50:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:18.166 00:23:18.166 real 0m9.374s 00:23:18.166 user 0m5.839s 00:23:18.166 sys 0m4.810s 00:23:18.166 10:50:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:18.166 10:50:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:18.166 ************************************ 00:23:18.166 END TEST nvmf_identify 00:23:18.166 ************************************ 00:23:18.166 10:50:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:18.166 10:50:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:18.166 10:50:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:18.166 10:50:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.166 ************************************ 00:23:18.166 START TEST nvmf_perf 00:23:18.166 ************************************ 00:23:18.167 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:18.426 * Looking for test storage... 
00:23:18.426 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:18.426 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:18.426 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:23:18.426 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:18.426 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:18.426 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:18.426 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:18.426 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:18.426 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:23:18.426 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:23:18.426 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:23:18.426 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:23:18.426 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:23:18.426 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:23:18.426 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:23:18.426 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:18.426 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:23:18.426 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:18.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.427 --rc genhtml_branch_coverage=1 00:23:18.427 --rc genhtml_function_coverage=1 00:23:18.427 --rc genhtml_legend=1 00:23:18.427 --rc geninfo_all_blocks=1 00:23:18.427 --rc geninfo_unexecuted_blocks=1 00:23:18.427 00:23:18.427 ' 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:18.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.427 --rc genhtml_branch_coverage=1 00:23:18.427 --rc genhtml_function_coverage=1 00:23:18.427 --rc genhtml_legend=1 00:23:18.427 --rc geninfo_all_blocks=1 00:23:18.427 --rc geninfo_unexecuted_blocks=1 00:23:18.427 00:23:18.427 ' 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:18.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.427 --rc genhtml_branch_coverage=1 00:23:18.427 --rc genhtml_function_coverage=1 00:23:18.427 --rc genhtml_legend=1 00:23:18.427 --rc geninfo_all_blocks=1 00:23:18.427 --rc geninfo_unexecuted_blocks=1 00:23:18.427 00:23:18.427 ' 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:18.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.427 --rc genhtml_branch_coverage=1 00:23:18.427 --rc genhtml_function_coverage=1 00:23:18.427 --rc genhtml_legend=1 00:23:18.427 --rc geninfo_all_blocks=1 00:23:18.427 --rc geninfo_unexecuted_blocks=1 00:23:18.427 00:23:18.427 ' 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:18.427 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.427 10:50:25 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:18.427 10:50:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:24.999 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:24.999 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:24.999 Found net devices under 0000:86:00.0: cvl_0_0 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:24.999 10:50:31 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:24.999 Found net devices under 0000:86:00.1: cvl_0_1 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:24.999 10:50:31 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:24.999 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:24.999 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:23:24.999 00:23:24.999 --- 10.0.0.2 ping statistics --- 00:23:24.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.999 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:24.999 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:24.999 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:23:24.999 00:23:24.999 --- 10.0.0.1 ping statistics --- 00:23:24.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.999 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=1776164 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 1776164 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 1776164 ']' 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:23:24.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:24.999 [2024-11-19 10:50:31.768118] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:23:24.999 [2024-11-19 10:50:31.768163] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:24.999 [2024-11-19 10:50:31.847929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:24.999 [2024-11-19 10:50:31.891390] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:24.999 [2024-11-19 10:50:31.891425] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:24.999 [2024-11-19 10:50:31.891432] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:24.999 [2024-11-19 10:50:31.891439] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:24.999 [2024-11-19 10:50:31.891444] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:24.999 [2024-11-19 10:50:31.893035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:24.999 [2024-11-19 10:50:31.893144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:24.999 [2024-11-19 10:50:31.893252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.999 [2024-11-19 10:50:31.893253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:24.999 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:24.999 10:50:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:24.999 10:50:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:24.999 10:50:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:28.287 10:50:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:28.287 10:50:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:28.287 10:50:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:23:28.287 10:50:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:28.287 10:50:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
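The target-side provisioning that perf.sh performs around this point boils down to a short rpc.py sequence: create a RAM-backed bdev, create the TCP transport, publish both the Malloc and local-NVMe bdevs as namespaces of one subsystem, and open listeners. A hedged consolidation of those calls as traced in this run (the rpc_py path, the 0000:5e:00.0 controller, and the 10.0.0.2:4420 listener are all specific to this machine):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc_py bdev_malloc_create 64 512                  # 64 MiB bdev, 512 B blocks -> "Malloc0"
    $rpc_py nvmf_create_transport -t tcp -o            # transport flags exactly as traced here
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The benchmark half then points spdk_nvme_perf either at the local controller (-r 'trtype:PCIe traddr:0000:5e:00.0') or at the TCP listener (-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'), which is why one Attached-to-PCIe run and several Attached-to-Fabrics runs follow below.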
00:23:28.287 10:50:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:23:28.287 10:50:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:28.287 10:50:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:28.287 10:50:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:28.287 [2024-11-19 10:50:35.671132] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:28.287 10:50:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:28.545 10:50:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:28.545 10:50:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:28.804 10:50:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:28.804 10:50:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:29.063 10:50:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:29.063 [2024-11-19 10:50:36.490175] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:29.321 10:50:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:29.321 10:50:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:23:29.321 10:50:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:23:29.321 10:50:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:29.321 10:50:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:23:30.699 Initializing NVMe Controllers 00:23:30.699 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:23:30.699 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:23:30.699 Initialization complete. Launching workers. 
00:23:30.699 ======================================================== 00:23:30.699 Latency(us) 00:23:30.699 Device Information : IOPS MiB/s Average min max 00:23:30.699 PCIE (0000:5e:00.0) NSID 1 from core 0: 97641.48 381.41 327.19 30.71 7201.63 00:23:30.699 ======================================================== 00:23:30.699 Total : 97641.48 381.41 327.19 30.71 7201.63 00:23:30.699 00:23:30.699 10:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:32.077 Initializing NVMe Controllers 00:23:32.077 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:32.077 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:32.077 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:32.077 Initialization complete. Launching workers. 00:23:32.077 ======================================================== 00:23:32.077 Latency(us) 00:23:32.077 Device Information : IOPS MiB/s Average min max 00:23:32.077 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 164.00 0.64 6231.14 122.34 45684.91 00:23:32.077 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 51.00 0.20 19681.01 7187.55 47892.59 00:23:32.077 ======================================================== 00:23:32.077 Total : 215.00 0.84 9421.58 122.34 47892.59 00:23:32.077 00:23:32.077 10:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:33.454 Initializing NVMe Controllers 00:23:33.454 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:33.454 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:33.454 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:33.455 Initialization complete. Launching workers. 00:23:33.455 ======================================================== 00:23:33.455 Latency(us) 00:23:33.455 Device Information : IOPS MiB/s Average min max 00:23:33.455 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10788.81 42.14 2966.20 440.49 6330.88 00:23:33.455 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3962.93 15.48 8115.36 7149.72 16083.63 00:23:33.455 ======================================================== 00:23:33.455 Total : 14751.73 57.62 4349.48 440.49 16083.63 00:23:33.455 00:23:33.455 10:50:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:23:33.455 10:50:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:23:33.455 10:50:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:35.989 Initializing NVMe Controllers 00:23:35.990 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:35.990 Controller IO queue size 128, less than required. 00:23:35.990 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:23:35.990 Controller IO queue size 128, less than required. 00:23:35.990 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:35.990 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:35.990 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:35.990 Initialization complete. Launching workers. 00:23:35.990 ======================================================== 00:23:35.990 Latency(us) 00:23:35.990 Device Information : IOPS MiB/s Average min max 00:23:35.990 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1749.14 437.29 74045.52 41914.01 112760.85 00:23:35.990 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 616.99 154.25 219467.47 80354.56 339192.07 00:23:35.990 ======================================================== 00:23:35.990 Total : 2366.13 591.53 111965.68 41914.01 339192.07 00:23:35.990 00:23:35.990 10:50:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:36.248 No valid NVMe controllers or AIO or URING devices found 00:23:36.248 Initializing NVMe Controllers 00:23:36.249 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:36.249 Controller IO queue size 128, less than required. 00:23:36.249 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:36.249 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:36.249 Controller IO queue size 128, less than required. 00:23:36.249 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:36.249 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:23:36.249 WARNING: Some requested NVMe devices were skipped 00:23:36.249 10:50:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:38.784 Initializing NVMe Controllers 00:23:38.784 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:38.784 Controller IO queue size 128, less than required. 00:23:38.784 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:38.784 Controller IO queue size 128, less than required. 00:23:38.784 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:38.784 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:38.784 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:38.784 Initialization complete. Launching workers. 
00:23:38.784 00:23:38.784 ==================== 00:23:38.784 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:38.784 TCP transport: 00:23:38.784 polls: 14136 00:23:38.784 idle_polls: 10810 00:23:38.784 sock_completions: 3326 00:23:38.784 nvme_completions: 6483 00:23:38.784 submitted_requests: 9728 00:23:38.784 queued_requests: 1 00:23:38.784 00:23:38.784 ==================== 00:23:38.784 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:38.784 TCP transport: 00:23:38.784 polls: 14399 00:23:38.784 idle_polls: 10988 00:23:38.784 sock_completions: 3411 00:23:38.784 nvme_completions: 6449 00:23:38.784 submitted_requests: 9640 00:23:38.784 queued_requests: 1 00:23:38.784 ======================================================== 00:23:38.785 Latency(us) 00:23:38.785 Device Information : IOPS MiB/s Average min max 00:23:38.785 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1619.55 404.89 80789.14 51618.26 121692.94 00:23:38.785 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1611.05 402.76 80557.49 41672.94 125784.37 00:23:38.785 ======================================================== 00:23:38.785 Total : 3230.60 807.65 80673.62 41672.94 125784.37 00:23:38.785 00:23:38.785 10:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:38.785 10:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:39.044 10:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:39.044 10:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:39.044 10:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:39.044 10:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:39.044 10:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:23:39.044 10:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:39.044 10:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:23:39.044 10:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:39.044 10:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:39.044 rmmod nvme_tcp 00:23:39.044 rmmod nvme_fabrics 00:23:39.044 rmmod nvme_keyring 00:23:39.044 10:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:39.044 10:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:23:39.044 10:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:23:39.044 10:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 1776164 ']' 00:23:39.044 10:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 1776164 00:23:39.044 10:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 1776164 ']' 00:23:39.044 10:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 1776164 00:23:39.044 10:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:23:39.044 10:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:39.044 10:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1776164 00:23:39.303 10:50:46 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:39.303 10:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:39.303 10:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1776164' 00:23:39.303 killing process with pid 1776164 00:23:39.303 10:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 1776164 00:23:39.303 10:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 1776164 00:23:40.680 10:50:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:40.680 10:50:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:40.680 10:50:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:40.680 10:50:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:23:40.680 10:50:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:23:40.680 10:50:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:40.680 10:50:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:23:40.680 10:50:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:40.680 10:50:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:40.680 10:50:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.680 10:50:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:40.680 10:50:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:43.217 00:23:43.217 real 0m24.449s 00:23:43.217 user 1m3.876s 00:23:43.217 sys 0m8.355s 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:43.217 ************************************ 00:23:43.217 END TEST nvmf_perf 00:23:43.217 ************************************ 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.217 ************************************ 00:23:43.217 START TEST nvmf_fio_host 00:23:43.217 ************************************ 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:43.217 * Looking for test storage... 
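The START TEST banner above comes from the harness's run_test wrapper: it checks that it was handed a script plus arguments (the traced '[' 3 -le 1 ']' guard), times the script, and brackets its output with START/END markers; the real/user/sys lines earlier are its time output. A rough reconstruction inferred from this trace, not a verbatim copy of autotest_common.sh:

    run_test() {
        [ $# -le 1 ] && return 1     # argument guard, traced above as: '[' 3 -le 1 ']'
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                    # e.g. .../test/nvmf/host/fio.sh --transport=tcp
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }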
00:23:43.217 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:43.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.217 --rc genhtml_branch_coverage=1 00:23:43.217 --rc genhtml_function_coverage=1 00:23:43.217 --rc genhtml_legend=1 00:23:43.217 --rc geninfo_all_blocks=1 00:23:43.217 --rc geninfo_unexecuted_blocks=1 00:23:43.217 00:23:43.217 ' 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:43.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.217 --rc genhtml_branch_coverage=1 00:23:43.217 --rc genhtml_function_coverage=1 00:23:43.217 --rc genhtml_legend=1 00:23:43.217 --rc geninfo_all_blocks=1 00:23:43.217 --rc geninfo_unexecuted_blocks=1 00:23:43.217 00:23:43.217 ' 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:43.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.217 --rc genhtml_branch_coverage=1 00:23:43.217 --rc genhtml_function_coverage=1 00:23:43.217 --rc genhtml_legend=1 00:23:43.217 --rc geninfo_all_blocks=1 00:23:43.217 --rc geninfo_unexecuted_blocks=1 00:23:43.217 00:23:43.217 ' 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:43.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.217 --rc genhtml_branch_coverage=1 00:23:43.217 --rc genhtml_function_coverage=1 00:23:43.217 --rc genhtml_legend=1 00:23:43.217 --rc geninfo_all_blocks=1 00:23:43.217 --rc geninfo_unexecuted_blocks=1 00:23:43.217 00:23:43.217 ' 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:43.217 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:43.217 10:50:50 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:43.218 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:43.218 
10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:23:43.218 10:50:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:49.788 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:49.788 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:49.788 Found net devices under 0000:86:00.0: cvl_0_0 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:49.788 Found net devices under 0000:86:00.1: cvl_0_1 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:49.788 10:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:49.788 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:49.788 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:49.788 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:49.788 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:49.788 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:49.788 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:49.788 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:49.788 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:49.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:49.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.375 ms 00:23:49.788 00:23:49.788 --- 10.0.0.2 ping statistics --- 00:23:49.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:49.788 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:23:49.788 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:49.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:49.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:23:49.788 00:23:49.788 --- 10.0.0.1 ping statistics --- 00:23:49.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:49.788 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:23:49.788 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:49.788 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:23:49.788 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:49.788 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:49.788 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:49.788 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:49.788 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:49.788 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:49.788 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:49.788 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:49.788 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:49.788 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:49.788 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.788 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1782306 00:23:49.788 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:49.788 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:49.788 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1782306 00:23:49.788 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 1782306 ']' 00:23:49.788 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:49.788 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:49.788 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:49.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:49.789 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:49.789 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.789 [2024-11-19 10:50:56.302168] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:23:49.789 [2024-11-19 10:50:56.302226] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:49.789 [2024-11-19 10:50:56.381069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:49.789 [2024-11-19 10:50:56.422608] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:49.789 [2024-11-19 10:50:56.422646] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:49.789 [2024-11-19 10:50:56.422654] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:49.789 [2024-11-19 10:50:56.422661] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:49.789 [2024-11-19 10:50:56.422667] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:49.789 [2024-11-19 10:50:56.424260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:49.789 [2024-11-19 10:50:56.424367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:49.789 [2024-11-19 10:50:56.424471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:49.789 [2024-11-19 10:50:56.424471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:49.789 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:49.789 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:23:49.789 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:49.789 [2024-11-19 10:50:56.702721] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:49.789 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:49.789 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:49.789 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.789 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:49.789 Malloc1 00:23:49.789 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:49.789 10:50:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:50.047 10:50:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:50.305 [2024-11-19 10:50:57.558450] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:50.305 10:50:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:50.565 10:50:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:23:50.565 10:50:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:50.565 10:50:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:50.565 10:50:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:50.565 10:50:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:50.565 10:50:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:50.565 10:50:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:50.565 10:50:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:23:50.565 10:50:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:50.565 10:50:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:50.565 10:50:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:50.565 10:50:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:23:50.565 10:50:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:50.565 10:50:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:50.565 10:50:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:50.565 10:50:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:50.565 10:50:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:50.565 10:50:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:50.565 10:50:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:50.565 10:50:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:50.565 10:50:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:50.565 10:50:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:50.565 10:50:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:50.824 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:50.824 fio-3.35 00:23:50.824 Starting 1 thread 00:23:53.356 00:23:53.356 test: (groupid=0, jobs=1): 
err= 0: pid=1782857: Tue Nov 19 10:51:00 2024 00:23:53.356 read: IOPS=11.5k, BW=45.1MiB/s (47.3MB/s)(90.4MiB/2006msec) 00:23:53.356 slat (nsec): min=1576, max=239695, avg=1727.69, stdev=2225.60 00:23:53.356 clat (usec): min=3192, max=10367, avg=6131.96, stdev=472.39 00:23:53.356 lat (usec): min=3224, max=10368, avg=6133.68, stdev=472.26 00:23:53.356 clat percentiles (usec): 00:23:53.356 | 1.00th=[ 5080], 5.00th=[ 5342], 10.00th=[ 5538], 20.00th=[ 5735], 00:23:53.356 | 30.00th=[ 5932], 40.00th=[ 5997], 50.00th=[ 6128], 60.00th=[ 6259], 00:23:53.356 | 70.00th=[ 6390], 80.00th=[ 6521], 90.00th=[ 6718], 95.00th=[ 6849], 00:23:53.356 | 99.00th=[ 7242], 99.50th=[ 7504], 99.90th=[ 8455], 99.95th=[ 9634], 00:23:53.356 | 99.99th=[10290] 00:23:53.356 bw ( KiB/s): min=45304, max=46760, per=100.00%, avg=46180.00, stdev=623.44, samples=4 00:23:53.356 iops : min=11326, max=11690, avg=11545.00, stdev=155.86, samples=4 00:23:53.356 write: IOPS=11.5k, BW=44.8MiB/s (47.0MB/s)(89.9MiB/2006msec); 0 zone resets 00:23:53.356 slat (nsec): min=1612, max=226341, avg=1792.29, stdev=1654.05 00:23:53.356 clat (usec): min=2444, max=9802, avg=4948.24, stdev=393.53 00:23:53.356 lat (usec): min=2460, max=9804, avg=4950.03, stdev=393.45 00:23:53.356 clat percentiles (usec): 00:23:53.356 | 1.00th=[ 4080], 5.00th=[ 4359], 10.00th=[ 4490], 20.00th=[ 4621], 00:23:53.356 | 30.00th=[ 4752], 40.00th=[ 4817], 50.00th=[ 4948], 60.00th=[ 5014], 00:23:53.356 | 70.00th=[ 5145], 80.00th=[ 5211], 90.00th=[ 5407], 95.00th=[ 5538], 00:23:53.356 | 99.00th=[ 5800], 99.50th=[ 5997], 99.90th=[ 8094], 99.95th=[ 9110], 00:23:53.356 | 99.99th=[ 9765] 00:23:53.356 bw ( KiB/s): min=45448, max=46528, per=100.00%, avg=45878.00, stdev=496.40, samples=4 00:23:53.356 iops : min=11362, max=11632, avg=11469.50, stdev=124.10, samples=4 00:23:53.356 lat (msec) : 4=0.32%, 10=99.67%, 20=0.01% 00:23:53.356 cpu : usr=73.57%, sys=25.39%, ctx=89, majf=0, minf=3 00:23:53.356 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:53.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:53.356 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:53.356 issued rwts: total=23154,23002,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:53.356 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:53.356 00:23:53.356 Run status group 0 (all jobs): 00:23:53.356 READ: bw=45.1MiB/s (47.3MB/s), 45.1MiB/s-45.1MiB/s (47.3MB/s-47.3MB/s), io=90.4MiB (94.8MB), run=2006-2006msec 00:23:53.356 WRITE: bw=44.8MiB/s (47.0MB/s), 44.8MiB/s-44.8MiB/s (47.0MB/s-47.0MB/s), io=89.9MiB (94.2MB), run=2006-2006msec 00:23:53.356 10:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:53.356 10:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:53.356 10:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:53.356 10:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:53.356 10:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- 
# local sanitizers 00:23:53.356 10:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:53.356 10:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:23:53.356 10:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:53.356 10:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:53.356 10:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:53.356 10:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:23:53.356 10:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:53.356 10:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:53.356 10:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:53.356 10:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:53.356 10:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:53.356 10:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:53.356 10:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:53.356 10:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:53.356 10:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:53.356 10:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:53.356 10:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:53.356 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:53.356 fio-3.35 00:23:53.356 Starting 1 thread 00:23:54.734 [2024-11-19 10:51:02.121176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15498b0 is same with the state(6) to be set 00:23:54.734 [2024-11-19 10:51:02.121233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15498b0 is same with the state(6) to be set 00:23:55.672 00:23:55.672 test: (groupid=0, jobs=1): err= 0: pid=1783492: Tue Nov 19 10:51:03 2024 00:23:55.672 read: IOPS=10.9k, BW=170MiB/s (178MB/s)(341MiB/2007msec) 00:23:55.672 slat (nsec): min=2527, max=86941, avg=2860.50, stdev=1319.37 00:23:55.672 clat (usec): min=1789, max=12797, avg=6824.16, stdev=1569.98 00:23:55.672 lat (usec): min=1792, max=12800, avg=6827.02, stdev=1570.07 00:23:55.672 clat percentiles (usec): 00:23:55.672 | 1.00th=[ 3490], 5.00th=[ 4293], 10.00th=[ 4817], 20.00th=[ 5473], 00:23:55.672 | 30.00th=[ 5997], 40.00th=[ 6390], 50.00th=[ 6849], 60.00th=[ 7242], 00:23:55.672 | 70.00th=[ 7635], 80.00th=[ 7963], 90.00th=[ 8717], 95.00th=[ 9634], 00:23:55.672 | 99.00th=[11076], 99.50th=[11469], 
99.90th=[11994], 99.95th=[12518], 00:23:55.672 | 99.99th=[12780] 00:23:55.672 bw ( KiB/s): min=81856, max=92896, per=50.44%, avg=87632.00, stdev=5937.44, samples=4 00:23:55.672 iops : min= 5116, max= 5806, avg=5477.00, stdev=371.09, samples=4 00:23:55.672 write: IOPS=6337, BW=99.0MiB/s (104MB/s)(179MiB/1811msec); 0 zone resets 00:23:55.672 slat (usec): min=30, max=254, avg=32.04, stdev= 6.35 00:23:55.672 clat (usec): min=3769, max=15587, avg=8686.26, stdev=1494.30 00:23:55.672 lat (usec): min=3800, max=15618, avg=8718.30, stdev=1495.19 00:23:55.672 clat percentiles (usec): 00:23:55.672 | 1.00th=[ 5800], 5.00th=[ 6521], 10.00th=[ 6915], 20.00th=[ 7439], 00:23:55.672 | 30.00th=[ 7767], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 8848], 00:23:55.672 | 70.00th=[ 9372], 80.00th=[ 9896], 90.00th=[10814], 95.00th=[11338], 00:23:55.672 | 99.00th=[12518], 99.50th=[12780], 99.90th=[14615], 99.95th=[14877], 00:23:55.672 | 99.99th=[15533] 00:23:55.672 bw ( KiB/s): min=85120, max=96544, per=89.97%, avg=91232.00, stdev=5617.01, samples=4 00:23:55.672 iops : min= 5320, max= 6034, avg=5702.00, stdev=351.06, samples=4 00:23:55.672 lat (msec) : 2=0.02%, 4=1.82%, 10=89.05%, 20=9.11% 00:23:55.672 cpu : usr=83.70%, sys=14.16%, ctx=138, majf=0, minf=3 00:23:55.672 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:23:55.672 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.672 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:55.672 issued rwts: total=21794,11477,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:55.672 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:55.672 00:23:55.672 Run status group 0 (all jobs): 00:23:55.672 READ: bw=170MiB/s (178MB/s), 170MiB/s-170MiB/s (178MB/s-178MB/s), io=341MiB (357MB), run=2007-2007msec 00:23:55.672 WRITE: bw=99.0MiB/s (104MB/s), 99.0MiB/s-99.0MiB/s (104MB/s-104MB/s), io=179MiB (188MB), run=1811-1811msec 00:23:55.931 10:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:55.931 10:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:23:55.931 10:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:55.931 10:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:23:55.931 10:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:23:55.931 10:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:55.931 10:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:23:55.931 10:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:55.931 10:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:23:55.931 10:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:55.931 10:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:55.931 rmmod nvme_tcp 00:23:55.931 rmmod nvme_fabrics 00:23:56.191 rmmod nvme_keyring 00:23:56.191 10:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:56.191 10:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:23:56.191 10:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:23:56.191 10:51:03 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 1782306 ']' 00:23:56.191 10:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 1782306 00:23:56.191 10:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 1782306 ']' 00:23:56.191 10:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 1782306 00:23:56.191 10:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:23:56.191 10:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:56.191 10:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1782306 00:23:56.191 10:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:56.191 10:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:56.191 10:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1782306' 00:23:56.191 killing process with pid 1782306 00:23:56.191 10:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 1782306 00:23:56.191 10:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 1782306 00:23:56.450 10:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:56.450 10:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:56.450 10:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:56.450 10:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:23:56.450 10:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:23:56.450 10:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:56.450 10:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:23:56.450 10:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:56.450 10:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:56.450 10:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.450 10:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:56.450 10:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.355 10:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:58.355 00:23:58.355 real 0m15.602s 00:23:58.355 user 0m45.930s 00:23:58.355 sys 0m6.415s 00:23:58.355 10:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:58.355 10:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.355 ************************************ 00:23:58.355 END TEST nvmf_fio_host 00:23:58.355 ************************************ 00:23:58.355 10:51:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:58.355 10:51:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:58.355 10:51:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:23:58.355 10:51:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.355 ************************************ 00:23:58.355 START TEST nvmf_failover 00:23:58.355 ************************************ 00:23:58.355 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:58.615 * Looking for test storage... 00:23:58.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:58.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.615 --rc genhtml_branch_coverage=1 00:23:58.615 --rc genhtml_function_coverage=1 00:23:58.615 --rc genhtml_legend=1 00:23:58.615 --rc geninfo_all_blocks=1 00:23:58.615 --rc geninfo_unexecuted_blocks=1 00:23:58.615 00:23:58.615 ' 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:58.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.615 --rc genhtml_branch_coverage=1 00:23:58.615 --rc genhtml_function_coverage=1 00:23:58.615 --rc genhtml_legend=1 00:23:58.615 --rc geninfo_all_blocks=1 00:23:58.615 --rc geninfo_unexecuted_blocks=1 00:23:58.615 00:23:58.615 ' 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:58.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.615 --rc genhtml_branch_coverage=1 00:23:58.615 --rc genhtml_function_coverage=1 00:23:58.615 --rc genhtml_legend=1 00:23:58.615 --rc geninfo_all_blocks=1 00:23:58.615 --rc geninfo_unexecuted_blocks=1 00:23:58.615 00:23:58.615 ' 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:58.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.615 --rc genhtml_branch_coverage=1 00:23:58.615 --rc genhtml_function_coverage=1 00:23:58.615 --rc genhtml_legend=1 00:23:58.615 --rc geninfo_all_blocks=1 00:23:58.615 --rc geninfo_unexecuted_blocks=1 00:23:58.615 00:23:58.615 ' 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:58.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:58.615 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:58.616 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
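
One detail worth noting: both this failover run and the fio_host run above log "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected". The xtrace line shows why: the guard executes as '[' '' -eq 1 ']', i.e. the variable under test expands to an empty string, which `[` cannot compare as an integer. It is benign here (the branch is simply not taken), but a defensive form would supply a numeric default before comparing; a sketch, with a hypothetical flag name standing in for whatever common.sh line 33 actually tests:

# SPDK_SOME_FLAG is a placeholder, not the real variable name;
# ${VAR:-0} substitutes 0 when the variable is unset or empty,
# so `[` always receives a valid integer operand
if [ "${SPDK_SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi
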
00:23:58.616 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:58.616 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:58.616 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:58.616 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:58.616 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:58.616 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:58.616 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:58.616 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.616 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:58.616 10:51:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.616 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:58.616 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:58.616 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:23:58.616 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:05.187 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:05.187 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:24:05.187 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:05.187 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:05.187 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:05.187 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:05.187 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:05.187 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:24:05.187 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:05.187 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:24:05.187 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:24:05.187 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:24:05.187 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:24:05.187 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:24:05.187 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:24:05.187 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:05.187 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:05.187 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:05.187 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:05.187 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:05.187 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:05.187 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:05.187 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:05.187 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:05.187 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:05.187 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:05.187 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:05.187 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:05.187 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:05.187 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:05.187 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:05.187 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:05.187 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:05.187 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:05.187 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:05.187 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:05.187 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:05.187 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:05.187 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.187 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.187 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:05.187 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:05.187 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:05.187 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:05.187 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:05.188 Found net devices under 0000:86:00.0: cvl_0_0 00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:05.188 Found net devices under 0000:86:00.1: cvl_0_1 00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:24:05.188 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:05.188 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.447 ms
00:24:05.188 
00:24:05.188 --- 10.0.0.2 ping statistics ---
00:24:05.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:05.188 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms
00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:05.188 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:05.188 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms
00:24:05.188 
00:24:05.188 --- 10.0.0.1 ping statistics ---
00:24:05.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:05.188 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms
00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0
00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE
00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=1787713
00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 1787713
00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1787713 ']'
00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:05.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:05.188 10:51:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:05.188 [2024-11-19 10:51:12.018158] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:24:05.188 [2024-11-19 10:51:12.018202] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:05.188 [2024-11-19 10:51:12.098349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:24:05.188 [2024-11-19 10:51:12.140621] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
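Here nvmfappstart launches nvmf_tgt inside the target namespace and waitforlisten polls until the app's RPC socket answers (up to max_retries=100 attempts). A rough equivalent in plain shell, where using rpc_get_methods as the readiness probe is an assumption (any cheap RPC would serve):

  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  # poll the UNIX-domain RPC socket until the target responds
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done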
00:24:05.188 [2024-11-19 10:51:12.140657] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:05.188 [2024-11-19 10:51:12.140664] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:05.188 [2024-11-19 10:51:12.140671] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:05.188 [2024-11-19 10:51:12.140676] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:05.188 [2024-11-19 10:51:12.142169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:24:05.188 [2024-11-19 10:51:12.142275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:24:05.188 [2024-11-19 10:51:12.142276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:24:05.188 10:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:05.188 10:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:24:05.188 10:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:24:05.188 10:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable
00:24:05.188 10:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:05.188 10:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:05.188 10:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:24:05.188 [2024-11-19 10:51:12.455350] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:05.188 10:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:24:05.447 Malloc0
00:24:05.447 10:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:05.707 10:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:05.707 10:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:05.967 [2024-11-19 10:51:13.262640] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:05.967 10:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:24:06.225 [2024-11-19 10:51:13.455183] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:24:06.225 10:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:24:06.225 [2024-11-19 10:51:13.667903] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:24:06.484 10:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1788103
00:24:06.484 10:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:24:06.484 10:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:24:06.484 10:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1788103 /var/tmp/bdevperf.sock
00:24:06.484 10:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1788103 ']'
00:24:06.484 10:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:06.484 10:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:06.484 10:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:06.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:06.484 10:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:06.484 10:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:06.743 10:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:06.743 10:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:24:06.743 10:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:07.002 NVMe0n1
00:24:07.002 10:51:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:07.261 
00:24:07.261 10:51:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:07.261 10:51:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1788197
00:24:07.261 10:51:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:24:08.642 10:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:08.643 [2024-11-19 10:51:15.841306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104f2d0 is same with the state(6) to be set
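The two bdev_nvme_attach_controller calls above are the heart of the failover setup: both use the same bdev name (-b NVMe0) and the same NQN, so the second call registers port 4421 as an alternate path to the existing NVMe0n1 bdev rather than creating a new one, and -x failover selects the multipath policy that switches paths on error instead of failing I/O. Condensed, with the long workspace path shortened into a variable for readability:

  RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock'
  # first attach creates bdev NVMe0n1 over 10.0.0.2:4420
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  # same -b/-n on port 4421: adds an alternate path, so no new bdev name is printed
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover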
[... the tcp.c:1773 'recv state of tqpair=0x104f2d0' message repeats verbatim with timestamps 10:51:15.841354 through 10:51:15.841567; identical lines omitted ...]
00:24:08.643 10:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:24:11.933 10:51:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:11.933 
00:24:11.933 10:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:24:12.191 [2024-11-19 10:51:19.385183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1050060 is same with the state(6) to be set
[... the tcp.c:1773 'recv state of tqpair=0x1050060' message repeats verbatim with timestamps 10:51:19.385223 through 10:51:19.385586; identical lines omitted ...]
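Each nvmf_subsystem_remove_listener tears down the live connection, and the burst of tcp.c:1773 errors is the target logging, while the qpair is dismantled, that the receive state being requested is the one it is already in; bdevperf meanwhile fails I/O over to the next path. A sketch of one way to watch the path switch from the host side (bdev_nvme_get_controllers queried against the bdevperf RPC socket; the usefulness of polling it mid-run is an assumption):

  # list the NVMe0 controller paths and their current state as bdevperf sees them
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n NVMe0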
00:24:12.192 10:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:24:15.474 10:51:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:15.475 [2024-11-19 10:51:22.600480] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:15.475 10:51:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:24:16.410 10:51:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:24:16.410 [2024-11-19 10:51:23.825129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1050e30 is same with the state(6) to be set
[... the tcp.c:1773 'recv state of tqpair=0x1050e30' message repeats verbatim with timestamps 10:51:23.825172 through 10:51:23.825453; identical lines omitted ...]
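Taken together, the listener operations above drive NVMe0 through three failover events: 4420 is removed (I/O moves to 4421), 4421 is removed (I/O moves to the freshly attached 4422 path), 4420 is re-added, and finally 4422 is removed so I/O fails back to 4420. The ladder, condensed into plain shell with the same rpc.py the test uses:

  R=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $R nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # fail over to 4421
  $R nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # fail over to 4422
  $R nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420      # restore the original port
  $R nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422   # fail back to 4420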
00:24:16.411 10:51:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1788197
00:24:22.988 {
00:24:22.988   "results": [
00:24:22.988     {
00:24:22.988       "job": "NVMe0n1",
00:24:22.988       "core_mask": "0x1",
00:24:22.988       "workload": "verify",
00:24:22.988       "status": "finished",
00:24:22.988       "verify_range": {
00:24:22.988         "start": 0,
00:24:22.988         "length": 16384
00:24:22.988       },
00:24:22.988       "queue_depth": 128,
00:24:22.988       "io_size": 4096,
00:24:22.988       "runtime": 15.006254,
00:24:22.988       "iops": 10920.313623906406,
00:24:22.988       "mibps": 42.6574750933844,
00:24:22.988       "io_failed": 9893,
00:24:22.988       "io_timeout": 0,
00:24:22.988       "avg_latency_us": 11031.803688258422,
00:24:22.988       "min_latency_us": 429.1895652173913,
00:24:22.988       "max_latency_us": 33052.93913043478
00:24:22.988     }
00:24:22.988   ],
00:24:22.988   "core_count": 1
00:24:22.988 }
00:24:22.988 10:51:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1788103
00:24:22.988 10:51:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1788103 ']'
00:24:22.988 10:51:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1788103
00:24:22.988 10:51:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:24:22.988 10:51:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:22.988 10:51:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1788103
00:24:22.989 10:51:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:24:22.989 10:51:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:24:22.989 10:51:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1788103'
00:24:22.989 killing process with pid 1788103
00:24:22.989 10:51:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1788103
00:24:22.989 10:51:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1788103
00:24:22.989 10:51:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:22.989 [2024-11-19 10:51:13.744842] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:24:22.989 [2024-11-19 10:51:13.744900] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1788103 ]
00:24:22.989 [2024-11-19 10:51:13.819834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:22.989 [2024-11-19 10:51:13.861356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:22.989 Running I/O for 15 seconds...
00:24:22.989 10933.00 IOPS, 42.71 MiB/s [2024-11-19T09:51:30.438Z]
00:24:22.989 [2024-11-19 10:51:15.842339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:97552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:22.989 [2024-11-19 10:51:15.842371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the remaining in-flight commands (READs lba:97560 through lba:97848, WRITEs lba:98048 and lba:98056) and their ABORTED - SQ DELETION (00/08) completions repeat in the same command/completion pattern, timestamps 10:51:15.842388 through 10:51:15.842995; lines omitted ...]
READ sqid:1 cid:94 nsid:1 lba:97856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.990 [2024-11-19 10:51:15.843002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.990 [2024-11-19 10:51:15.843011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:97864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.990 [2024-11-19 10:51:15.843018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.990 [2024-11-19 10:51:15.843026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:97872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.990 [2024-11-19 10:51:15.843033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.990 [2024-11-19 10:51:15.843041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:97880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.990 [2024-11-19 10:51:15.843047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.990 [2024-11-19 10:51:15.843056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:97888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.990 [2024-11-19 10:51:15.843063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.990 [2024-11-19 10:51:15.843071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:97896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.990 [2024-11-19 10:51:15.843077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.990 [2024-11-19 10:51:15.843085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.990 [2024-11-19 10:51:15.843091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.990 [2024-11-19 10:51:15.843100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.990 [2024-11-19 10:51:15.843106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.990 [2024-11-19 10:51:15.843114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:97920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.990 [2024-11-19 10:51:15.843125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.990 [2024-11-19 10:51:15.843134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:97928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.990 [2024-11-19 10:51:15.843140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.990 [2024-11-19 10:51:15.843148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:97936 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.990 [2024-11-19 10:51:15.843155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.990 [2024-11-19 10:51:15.843163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:97944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.990 [2024-11-19 10:51:15.843170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.990 [2024-11-19 10:51:15.843178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:97952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.990 [2024-11-19 10:51:15.843184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.990 [2024-11-19 10:51:15.843193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:97960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.990 [2024-11-19 10:51:15.843200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.990 [2024-11-19 10:51:15.843208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:97968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.990 [2024-11-19 10:51:15.843215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.990 [2024-11-19 10:51:15.843224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:97976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.990 [2024-11-19 10:51:15.843230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.990 [2024-11-19 10:51:15.843238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:97984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.990 [2024-11-19 10:51:15.843245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.990 [2024-11-19 10:51:15.843253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:97992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.990 [2024-11-19 10:51:15.843260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.990 [2024-11-19 10:51:15.843268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.990 [2024-11-19 10:51:15.843274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.990 [2024-11-19 10:51:15.843283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:98008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.990 [2024-11-19 10:51:15.843289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.990 [2024-11-19 10:51:15.843297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:98016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:22.990 [2024-11-19 10:51:15.843304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.990 [2024-11-19 10:51:15.843312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:98024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.990 [2024-11-19 10:51:15.843320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.990 [2024-11-19 10:51:15.843329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:98032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.990 [2024-11-19 10:51:15.843335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.990 [2024-11-19 10:51:15.843343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:98040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.990 [2024-11-19 10:51:15.843349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.990 [2024-11-19 10:51:15.843358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.990 [2024-11-19 10:51:15.843372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.990 [2024-11-19 10:51:15.843381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.990 [2024-11-19 10:51:15.843387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.990 [2024-11-19 10:51:15.843395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:98080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.990 [2024-11-19 10:51:15.843402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.990 [2024-11-19 10:51:15.843411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:98088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.990 [2024-11-19 10:51:15.843418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.990 [2024-11-19 10:51:15.843426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:98096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.990 [2024-11-19 10:51:15.843433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.990 [2024-11-19 10:51:15.843441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:98104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.990 [2024-11-19 10:51:15.843447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.990 [2024-11-19 10:51:15.843455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.990 [2024-11-19 10:51:15.843462] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.990 [2024-11-19 10:51:15.843470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.990 [2024-11-19 10:51:15.843476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.990 [2024-11-19 10:51:15.843484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:98128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.990 [2024-11-19 10:51:15.843491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.990 [2024-11-19 10:51:15.843499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:98136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.990 [2024-11-19 10:51:15.843506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.991 [2024-11-19 10:51:15.843516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:98144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.991 [2024-11-19 10:51:15.843523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.991 [2024-11-19 10:51:15.843531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:98152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.991 [2024-11-19 10:51:15.843537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.991 [2024-11-19 10:51:15.843545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:98160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.991 [2024-11-19 10:51:15.843552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.991 [2024-11-19 10:51:15.843560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.991 [2024-11-19 10:51:15.843567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.991 [2024-11-19 10:51:15.843576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:98176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.991 [2024-11-19 10:51:15.843582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.991 [2024-11-19 10:51:15.843590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:98184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.991 [2024-11-19 10:51:15.843597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.991 [2024-11-19 10:51:15.843605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:98192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.991 [2024-11-19 10:51:15.843613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.991 [2024-11-19 10:51:15.843622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:98200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.991 [2024-11-19 10:51:15.843628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.991 [2024-11-19 10:51:15.843636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:98208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.991 [2024-11-19 10:51:15.843643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.991 [2024-11-19 10:51:15.843651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:98216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.991 [2024-11-19 10:51:15.843657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.991 [2024-11-19 10:51:15.843666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.991 [2024-11-19 10:51:15.843672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.991 [2024-11-19 10:51:15.843680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:98232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.991 [2024-11-19 10:51:15.843687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.991 [2024-11-19 10:51:15.843696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:98240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.991 [2024-11-19 10:51:15.843704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.991 [2024-11-19 10:51:15.843712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.991 [2024-11-19 10:51:15.843719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.991 [2024-11-19 10:51:15.843727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:98256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.991 [2024-11-19 10:51:15.843733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.991 [2024-11-19 10:51:15.843741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:98264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.991 [2024-11-19 10:51:15.843748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.991 [2024-11-19 10:51:15.843756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:98272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.991 [2024-11-19 10:51:15.843764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:22.991 [2024-11-19 10:51:15.843772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:98280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.991 [2024-11-19 10:51:15.843778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.991 [2024-11-19 10:51:15.843786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.991 [2024-11-19 10:51:15.843793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.991 [2024-11-19 10:51:15.843801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:98296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.991 [2024-11-19 10:51:15.843807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.991 [2024-11-19 10:51:15.843815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:98304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.991 [2024-11-19 10:51:15.843822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.991 [2024-11-19 10:51:15.843830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:98312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.991 [2024-11-19 10:51:15.843837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.991 [2024-11-19 10:51:15.843845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:98320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.991 [2024-11-19 10:51:15.843853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.991 [2024-11-19 10:51:15.843861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.991 [2024-11-19 10:51:15.843868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.991 [2024-11-19 10:51:15.843876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:98336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.991 [2024-11-19 10:51:15.843882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.991 [2024-11-19 10:51:15.843891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:98344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.991 [2024-11-19 10:51:15.843898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.991 [2024-11-19 10:51:15.843906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:98352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.991 [2024-11-19 10:51:15.843912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.991 
[2024-11-19 10:51:15.843920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:98360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.991 [2024-11-19 10:51:15.843927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.991 [2024-11-19 10:51:15.843935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.991 [2024-11-19 10:51:15.843942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.991 [2024-11-19 10:51:15.843953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.991 [2024-11-19 10:51:15.843960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.991 [2024-11-19 10:51:15.843968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.991 [2024-11-19 10:51:15.843975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.991 [2024-11-19 10:51:15.843983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.991 [2024-11-19 10:51:15.843990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.991 [2024-11-19 10:51:15.843998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:98400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.991 [2024-11-19 10:51:15.844006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.991 [2024-11-19 10:51:15.844014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:98408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.991 [2024-11-19 10:51:15.844020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.991 [2024-11-19 10:51:15.844029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:98416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.991 [2024-11-19 10:51:15.844035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.991 [2024-11-19 10:51:15.844043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.991 [2024-11-19 10:51:15.844050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.991 [2024-11-19 10:51:15.844058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.991 [2024-11-19 10:51:15.844064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.991 [2024-11-19 10:51:15.844073] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.991 [2024-11-19 10:51:15.844080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.991 [2024-11-19 10:51:15.844090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:98448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.991 [2024-11-19 10:51:15.844098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.991 [2024-11-19 10:51:15.844106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.991 [2024-11-19 10:51:15.844113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.992 [2024-11-19 10:51:15.844121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:98464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.992 [2024-11-19 10:51:15.844128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.992 [2024-11-19 10:51:15.844135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:98472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.992 [2024-11-19 10:51:15.844142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.992 [2024-11-19 10:51:15.844150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:98480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.992 [2024-11-19 10:51:15.844157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.992 [2024-11-19 10:51:15.844165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.992 [2024-11-19 10:51:15.844172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.992 [2024-11-19 10:51:15.844180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:98496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.992 [2024-11-19 10:51:15.844186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.992 [2024-11-19 10:51:15.844194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:98504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.992 [2024-11-19 10:51:15.844201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.992 [2024-11-19 10:51:15.844209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:98512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.992 [2024-11-19 10:51:15.844215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.992 [2024-11-19 10:51:15.844223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
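Every completion in the run above carries the status "(00/08)". In NVMe status encoding this is (status code type / status code): SCT 0x0 is the generic command status set, and SC 0x08 in that set is "Command Aborted due to SQ Deletion", which is what is expected while the target tears down the submission queue during failover. A minimal stand-alone C decoder for this pair (it covers only the two codes seen in this log and is an illustration, not SPDK's own decode path):

#include <stdio.h>

/* Decode an NVMe (status code type / status code) pair as printed in the
 * "(00/08)" field of the completions above.  Only the codes that actually
 * appear in this log are handled; everything else falls through. */
static const char *nvme_status_str(unsigned int sct, unsigned int sc)
{
    if (sct == 0x0) {                 /* generic command status */
        if (sc == 0x00)
            return "SUCCESS";
        if (sc == 0x08)
            return "ABORTED - SQ DELETION";
    }
    return "OTHER";
}

int main(void)
{
    printf("(00/08) -> %s\n", nvme_status_str(0x0, 0x08));  /* ABORTED - SQ DELETION */
    return 0;
}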
00:24:22.992 [2024-11-19 10:51:15.844337] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:22.992 [2024-11-19 10:51:15.844344] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:22.992 [2024-11-19 10:51:15.844350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98568 len:8 PRP1 0x0 PRP2 0x0
00:24:22.992 [2024-11-19 10:51:15.844358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:22.992 [2024-11-19 10:51:15.844402] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:24:22.992 [2024-11-19 10:51:15.844425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:22.992 [2024-11-19 10:51:15.844433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same pair repeated for the admin ASYNC EVENT REQUEST commands qid:0 cid:1-3 ...]
00:24:22.992 [2024-11-19 10:51:15.844480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:24:22.992 [2024-11-19 10:51:15.844519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ce340 (9): Bad file descriptor
00:24:22.992 [2024-11-19 10:51:15.847344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:24:22.992 [2024-11-19 10:51:15.872303] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:24:22.992 10880.50 IOPS, 42.50 MiB/s [2024-11-19T09:51:30.441Z] 10985.00 IOPS, 42.91 MiB/s [2024-11-19T09:51:30.441Z] 11051.50 IOPS, 43.17 MiB/s [2024-11-19T09:51:30.441Z]
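The throughput samples just above are consistent with the I/O size shown in the command prints: every command is len:8 blocks, i.e. 8 x 512 B = 4 KiB per I/O, assuming the namespace uses the common 512-byte block format (an assumption; the namespace format is not printed in this log). A quick check in C:

#include <stdio.h>

int main(void)
{
    /* len:8 blocks per command, assumed 512-byte blocks -> 4096 B per I/O */
    const double io_bytes = 8 * 512;
    const double iops[] = { 10880.50, 10985.00, 11051.50 };

    for (int i = 0; i < 3; i++)
        printf("%.2f IOPS -> %.2f MiB/s\n",
               iops[i], iops[i] * io_bytes / (1024.0 * 1024.0));
    /* prints 42.50, 42.91 and 43.17 MiB/s -- matching the samples above */
    return 0;
}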
00:24:22.992 [2024-11-19 10:51:19.385884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:22.992 [2024-11-19 10:51:19.385916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command/completion NOTICE pairs repeated for every queued READ (lba 29040-29392) and WRITE (lba 29424-29744) on sqid:1, each completed ABORTED - SQ DELETION (00/08) qid:1 ...]
00:24:22.994 [2024-11-19 10:51:19.387233] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:29752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.994 [2024-11-19 10:51:19.387239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.994 [2024-11-19 10:51:19.387249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:29760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.994 [2024-11-19 10:51:19.387255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.994 [2024-11-19 10:51:19.387263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.994 [2024-11-19 10:51:19.387269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.994 [2024-11-19 10:51:19.387277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.994 [2024-11-19 10:51:19.387285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.995 [2024-11-19 10:51:19.387293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:29784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.995 [2024-11-19 10:51:19.387299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.995 [2024-11-19 10:51:19.387307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:29792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.995 [2024-11-19 10:51:19.387314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.995 [2024-11-19 10:51:19.387322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:29800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.995 [2024-11-19 10:51:19.387329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.995 [2024-11-19 10:51:19.387337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:29808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.995 [2024-11-19 10:51:19.387343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.995 [2024-11-19 10:51:19.387351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:29816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.995 [2024-11-19 10:51:19.387357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.995 [2024-11-19 10:51:19.387365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.995 [2024-11-19 10:51:19.387373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.995 [2024-11-19 10:51:19.387381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:94 nsid:1 lba:29832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.995 [2024-11-19 10:51:19.387388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.995 [2024-11-19 10:51:19.387396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:29840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.995 [2024-11-19 10:51:19.387402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.995 [2024-11-19 10:51:19.387410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:29848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.995 [2024-11-19 10:51:19.387416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.995 [2024-11-19 10:51:19.387424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:29856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.995 [2024-11-19 10:51:19.387431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.995 [2024-11-19 10:51:19.387439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.995 [2024-11-19 10:51:19.387445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.995 [2024-11-19 10:51:19.387453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.995 [2024-11-19 10:51:19.387459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.995 [2024-11-19 10:51:19.387468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:29880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.995 [2024-11-19 10:51:19.387475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.995 [2024-11-19 10:51:19.387484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:29888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.995 [2024-11-19 10:51:19.387490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.995 [2024-11-19 10:51:19.387498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:29896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.995 [2024-11-19 10:51:19.387505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.995 [2024-11-19 10:51:19.387513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:29904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.995 [2024-11-19 10:51:19.387519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.995 [2024-11-19 10:51:19.387528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:29912 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:22.995 [2024-11-19 10:51:19.387535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.995 [2024-11-19 10:51:19.387543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.995 [2024-11-19 10:51:19.387549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.995 [2024-11-19 10:51:19.387557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:29928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.995 [2024-11-19 10:51:19.387564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.995 [2024-11-19 10:51:19.387571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:29936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.995 [2024-11-19 10:51:19.387578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.995 [2024-11-19 10:51:19.387586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:29944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.995 [2024-11-19 10:51:19.387592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.995 [2024-11-19 10:51:19.387600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:29952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.995 [2024-11-19 10:51:19.387608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.995 [2024-11-19 10:51:19.387616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:29960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.995 [2024-11-19 10:51:19.387623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.995 [2024-11-19 10:51:19.387631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:29968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.995 [2024-11-19 10:51:19.387637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.995 [2024-11-19 10:51:19.387645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:29976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.995 [2024-11-19 10:51:19.387653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.995 [2024-11-19 10:51:19.387662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.995 [2024-11-19 10:51:19.387668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.995 [2024-11-19 10:51:19.387676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:29992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.995 [2024-11-19 
10:51:19.387682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.995 [2024-11-19 10:51:19.387690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:30000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.995 [2024-11-19 10:51:19.387697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.995 [2024-11-19 10:51:19.387705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:30008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.995 [2024-11-19 10:51:19.387711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.995 [2024-11-19 10:51:19.387720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:30016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.995 [2024-11-19 10:51:19.387727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.995 [2024-11-19 10:51:19.387735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:30024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.995 [2024-11-19 10:51:19.387742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.995 [2024-11-19 10:51:19.387750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:30032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.995 [2024-11-19 10:51:19.387756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.995 [2024-11-19 10:51:19.387764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:30040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.995 [2024-11-19 10:51:19.387771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.995 [2024-11-19 10:51:19.387779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:30048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.995 [2024-11-19 10:51:19.387785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.995 [2024-11-19 10:51:19.387793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.995 [2024-11-19 10:51:19.387799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.995 [2024-11-19 10:51:19.387807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:29408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.995 [2024-11-19 10:51:19.387814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.995 [2024-11-19 10:51:19.387866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:22.995 [2024-11-19 10:51:19.387875] nvme_qpair.c: 
00:24:22.995 [2024-11-19 10:51:19.387875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:22.995 [2024-11-19 10:51:19.387885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:22.995 [2024-11-19 10:51:19.387893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:22.995 [2024-11-19 10:51:19.387901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:22.995 [2024-11-19 10:51:19.387907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:22.995 [2024-11-19 10:51:19.387914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:22.995 [2024-11-19 10:51:19.387921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:22.996 [2024-11-19 10:51:19.387927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ce340 is same with the state(6) to be set
00:24:22.996 [2024-11-19 10:51:19.388150] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:22.996 [2024-11-19 10:51:19.388158] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:22.996 [2024-11-19 10:51:19.388165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29416 len:8 PRP1 0x0 PRP2 0x0
00:24:22.996 [2024-11-19 10:51:19.388172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:22.996 [2024-11-19 10:51:19.388181] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:22.996 [2024-11-19 10:51:19.388186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:22.996 [2024-11-19 10:51:19.388192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29032 len:8 PRP1 0x0 PRP2 0x0
00:24:22.996 [2024-11-19 10:51:19.388199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:22.996 [2024-11-19 10:51:19.388206] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:22.996 [2024-11-19 10:51:19.388211] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:22.996 [2024-11-19 10:51:19.388217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29040 len:8 PRP1 0x0 PRP2 0x0
00:24:22.996 [2024-11-19 10:51:19.388224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:22.996 [2024-11-19 10:51:19.388230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:22.996 [2024-11-19 10:51:19.388236] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:22.996 [2024-11-19 10:51:19.388241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29048 len:8 PRP1 0x0 PRP2 0x0
00:24:22.996 [2024-11-19 10:51:19.388248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:22.996 [2024-11-19 10:51:19.399284] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:22.996 [2024-11-19 10:51:19.399297] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:22.996 [2024-11-19 10:51:19.399306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29056 len:8 PRP1 0x0 PRP2 0x0
00:24:22.996 [2024-11-19 10:51:19.399316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:22.998 [2024-11-19 10:51:19.400678] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:22.998 [2024-11-19 10:51:19.400686] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:22.998 [2024-11-19 10:51:19.400694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29392 len:8 PRP1 0x0 PRP2 0x0
00:24:22.998 [2024-11-19 10:51:19.400703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:22.998 [2024-11-19 10:51:19.400712] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:22.998 [2024-11-19 10:51:19.400719] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:22.998 [2024-11-19 10:51:19.400726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29424 len:8 PRP1 0x0 PRP2 0x0
00:24:22.998 [2024-11-19 10:51:19.400735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:22.999 [2024-11-19 10:51:19.401463] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:22.999 [2024-11-19 10:51:19.401473] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:22.999 [2024-11-19 10:51:19.401481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29608 len:8 PRP1 0x0 PRP2 0x0
00:24:22.999 [2024-11-19 10:51:19.401489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.999 [2024-11-19 10:51:19.401498] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:22.999 [2024-11-19 10:51:19.401505] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:22.999 [2024-11-19 10:51:19.401512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29616 len:8 PRP1 0x0 PRP2 0x0 00:24:22.999 [2024-11-19 10:51:19.401521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.999 [2024-11-19 10:51:19.401530] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:22.999 [2024-11-19 10:51:19.401537] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:22.999 [2024-11-19 10:51:19.401544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29624 len:8 PRP1 0x0 PRP2 0x0 00:24:22.999 [2024-11-19 10:51:19.407887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.999 [2024-11-19 10:51:19.407906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:22.999 [2024-11-19 10:51:19.407915] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:22.999 [2024-11-19 10:51:19.407926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29632 len:8 PRP1 0x0 PRP2 0x0 00:24:22.999 [2024-11-19 10:51:19.407937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.999 [2024-11-19 10:51:19.407955] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:22.999 [2024-11-19 10:51:19.407965] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:22.999 [2024-11-19 10:51:19.407975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29640 len:8 PRP1 0x0 PRP2 0x0 00:24:22.999 [2024-11-19 10:51:19.407986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.999 [2024-11-19 10:51:19.407999] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:22.999 [2024-11-19 10:51:19.408011] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:22.999 [2024-11-19 10:51:19.408021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29648 len:8 PRP1 0x0 PRP2 0x0 00:24:22.999 [2024-11-19 10:51:19.408032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.999 [2024-11-19 10:51:19.408044] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:22.999 [2024-11-19 10:51:19.408053] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:22.999 [2024-11-19 10:51:19.408063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29656 len:8 PRP1 0x0 PRP2 0x0 00:24:22.999 [2024-11-19 10:51:19.408075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:22.999 [2024-11-19 10:51:19.408087] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:22.999 [2024-11-19 10:51:19.408095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:22.999 [2024-11-19 10:51:19.408105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29664 len:8 PRP1 0x0 PRP2 0x0 00:24:22.999 [2024-11-19 10:51:19.408117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.999 [2024-11-19 10:51:19.408129] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:22.999 [2024-11-19 10:51:19.408139] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:22.999 [2024-11-19 10:51:19.408149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29672 len:8 PRP1 0x0 PRP2 0x0 00:24:22.999 [2024-11-19 10:51:19.408160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.999 [2024-11-19 10:51:19.408173] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:22.999 [2024-11-19 10:51:19.408182] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:22.999 [2024-11-19 10:51:19.408191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29680 len:8 PRP1 0x0 PRP2 0x0 00:24:22.999 [2024-11-19 10:51:19.408203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.999 [2024-11-19 10:51:19.408215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:22.999 [2024-11-19 10:51:19.408224] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:22.999 [2024-11-19 10:51:19.408233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29688 len:8 PRP1 0x0 PRP2 0x0 00:24:22.999 [2024-11-19 10:51:19.408245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.999 [2024-11-19 10:51:19.408257] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:22.999 [2024-11-19 10:51:19.408266] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:22.999 [2024-11-19 10:51:19.408276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29696 len:8 PRP1 0x0 PRP2 0x0 00:24:22.999 [2024-11-19 10:51:19.408287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.999 [2024-11-19 10:51:19.408299] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:22.999 [2024-11-19 10:51:19.408308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:22.999 [2024-11-19 10:51:19.408318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29704 len:8 PRP1 0x0 PRP2 0x0 00:24:22.999 [2024-11-19 10:51:19.408329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.999 [2024-11-19 
10:51:19.408343] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:22.999 [2024-11-19 10:51:19.408353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:22.999 [2024-11-19 10:51:19.408362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29712 len:8 PRP1 0x0 PRP2 0x0 00:24:22.999 [2024-11-19 10:51:19.408375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.999 [2024-11-19 10:51:19.408387] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:22.999 [2024-11-19 10:51:19.408395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:22.999 [2024-11-19 10:51:19.408405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29720 len:8 PRP1 0x0 PRP2 0x0 00:24:22.999 [2024-11-19 10:51:19.408418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.999 [2024-11-19 10:51:19.408430] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:22.999 [2024-11-19 10:51:19.408438] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:22.999 [2024-11-19 10:51:19.408448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29728 len:8 PRP1 0x0 PRP2 0x0 00:24:22.999 [2024-11-19 10:51:19.408460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.999 [2024-11-19 10:51:19.408472] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:22.999 [2024-11-19 10:51:19.408481] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:22.999 [2024-11-19 10:51:19.408491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29736 len:8 PRP1 0x0 PRP2 0x0 00:24:22.999 [2024-11-19 10:51:19.408503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.999 [2024-11-19 10:51:19.408515] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:22.999 [2024-11-19 10:51:19.408523] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:22.999 [2024-11-19 10:51:19.408534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29744 len:8 PRP1 0x0 PRP2 0x0 00:24:22.999 [2024-11-19 10:51:19.408545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.999 [2024-11-19 10:51:19.408557] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:22.999 [2024-11-19 10:51:19.408566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:22.999 [2024-11-19 10:51:19.408576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29752 len:8 PRP1 0x0 PRP2 0x0 00:24:22.999 [2024-11-19 10:51:19.408588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.999 [2024-11-19 10:51:19.408600] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:22.999 [2024-11-19 10:51:19.408608] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:22.999 [2024-11-19 10:51:19.408618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29760 len:8 PRP1 0x0 PRP2 0x0 00:24:22.999 [2024-11-19 10:51:19.408629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.999 [2024-11-19 10:51:19.408642] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:23.000 [2024-11-19 10:51:19.408650] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:23.000 [2024-11-19 10:51:19.408660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29768 len:8 PRP1 0x0 PRP2 0x0 00:24:23.000 [2024-11-19 10:51:19.408674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.000 [2024-11-19 10:51:19.408686] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:23.000 [2024-11-19 10:51:19.408695] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:23.000 [2024-11-19 10:51:19.408704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29776 len:8 PRP1 0x0 PRP2 0x0 00:24:23.000 [2024-11-19 10:51:19.408716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.000 [2024-11-19 10:51:19.408729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:23.000 [2024-11-19 10:51:19.408738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:23.000 [2024-11-19 10:51:19.408748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29784 len:8 PRP1 0x0 PRP2 0x0 00:24:23.000 [2024-11-19 10:51:19.408760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.000 [2024-11-19 10:51:19.408773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:23.000 [2024-11-19 10:51:19.408781] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:23.000 [2024-11-19 10:51:19.408791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29792 len:8 PRP1 0x0 PRP2 0x0 00:24:23.000 [2024-11-19 10:51:19.408803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.000 [2024-11-19 10:51:19.408815] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:23.000 [2024-11-19 10:51:19.408824] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:23.000 [2024-11-19 10:51:19.408835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29800 len:8 PRP1 0x0 PRP2 0x0 00:24:23.000 [2024-11-19 10:51:19.408846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.000 [2024-11-19 10:51:19.408858] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:24:23.000 [2024-11-19 10:51:19.408868] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:23.000 [2024-11-19 10:51:19.408878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29808 len:8 PRP1 0x0 PRP2 0x0 00:24:23.000 [2024-11-19 10:51:19.408890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.000 [2024-11-19 10:51:19.408902] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:23.000 [2024-11-19 10:51:19.408910] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:23.000 [2024-11-19 10:51:19.408921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29816 len:8 PRP1 0x0 PRP2 0x0 00:24:23.000 [2024-11-19 10:51:19.408932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.000 [2024-11-19 10:51:19.408945] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:23.000 [2024-11-19 10:51:19.408960] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:23.000 [2024-11-19 10:51:19.408970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29824 len:8 PRP1 0x0 PRP2 0x0 00:24:23.000 [2024-11-19 10:51:19.408982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.000 [2024-11-19 10:51:19.408994] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:23.000 [2024-11-19 10:51:19.409003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:23.000 [2024-11-19 10:51:19.409015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29832 len:8 PRP1 0x0 PRP2 0x0 00:24:23.000 [2024-11-19 10:51:19.409027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.000 [2024-11-19 10:51:19.409039] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:23.000 [2024-11-19 10:51:19.409048] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:23.000 [2024-11-19 10:51:19.409059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29840 len:8 PRP1 0x0 PRP2 0x0 00:24:23.000 [2024-11-19 10:51:19.409070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.000 [2024-11-19 10:51:19.409082] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:23.000 [2024-11-19 10:51:19.409091] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:23.000 [2024-11-19 10:51:19.409102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29848 len:8 PRP1 0x0 PRP2 0x0 00:24:23.000 [2024-11-19 10:51:19.409113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.000 [2024-11-19 10:51:19.409125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:23.000 [2024-11-19 10:51:19.409134] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:23.000 [2024-11-19 10:51:19.409144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29856 len:8 PRP1 0x0 PRP2 0x0 00:24:23.000 [2024-11-19 10:51:19.409156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.000 [2024-11-19 10:51:19.409168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:23.000 [2024-11-19 10:51:19.409177] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:23.000 [2024-11-19 10:51:19.409188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29864 len:8 PRP1 0x0 PRP2 0x0 00:24:23.000 [2024-11-19 10:51:19.409199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.000 [2024-11-19 10:51:19.409211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:23.000 [2024-11-19 10:51:19.409220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:23.000 [2024-11-19 10:51:19.409230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29872 len:8 PRP1 0x0 PRP2 0x0 00:24:23.000 [2024-11-19 10:51:19.409242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.000 [2024-11-19 10:51:19.409254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:23.000 [2024-11-19 10:51:19.409263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:23.000 [2024-11-19 10:51:19.409273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29880 len:8 PRP1 0x0 PRP2 0x0 00:24:23.000 [2024-11-19 10:51:19.409285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.000 [2024-11-19 10:51:19.409297] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:23.000 [2024-11-19 10:51:19.409306] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:23.000 [2024-11-19 10:51:19.409315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29888 len:8 PRP1 0x0 PRP2 0x0 00:24:23.000 [2024-11-19 10:51:19.409327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.000 [2024-11-19 10:51:19.409342] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:23.000 [2024-11-19 10:51:19.409351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:23.000 [2024-11-19 10:51:19.409361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29896 len:8 PRP1 0x0 PRP2 0x0 00:24:23.000 [2024-11-19 10:51:19.409373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.000 [2024-11-19 10:51:19.409385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:23.000 [2024-11-19 10:51:19.409394] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:24:23.000 [2024-11-19 10:51:19.409404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29904 len:8 PRP1 0x0 PRP2 0x0 00:24:23.000 [2024-11-19 10:51:19.409416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.000 [2024-11-19 10:51:19.409428] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:23.000 [2024-11-19 10:51:19.409437] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:23.000 [2024-11-19 10:51:19.409447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29912 len:8 PRP1 0x0 PRP2 0x0 00:24:23.000 [2024-11-19 10:51:19.409459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.000 [2024-11-19 10:51:19.409471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:23.000 [2024-11-19 10:51:19.409480] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:23.000 [2024-11-19 10:51:19.409490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29920 len:8 PRP1 0x0 PRP2 0x0 00:24:23.000 [2024-11-19 10:51:19.409502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.000 [2024-11-19 10:51:19.409514] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:23.000 [2024-11-19 10:51:19.409523] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:23.000 [2024-11-19 10:51:19.409534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29928 len:8 PRP1 0x0 PRP2 0x0 00:24:23.000 [2024-11-19 10:51:19.409545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.000 [2024-11-19 10:51:19.409558] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:23.000 [2024-11-19 10:51:19.409567] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:23.000 [2024-11-19 10:51:19.409577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29936 len:8 PRP1 0x0 PRP2 0x0 00:24:23.000 [2024-11-19 10:51:19.409588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.000 [2024-11-19 10:51:19.409600] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:23.000 [2024-11-19 10:51:19.409609] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:23.000 [2024-11-19 10:51:19.409619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29944 len:8 PRP1 0x0 PRP2 0x0 00:24:23.000 [2024-11-19 10:51:19.409631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.000 [2024-11-19 10:51:19.409643] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:23.000 [2024-11-19 10:51:19.409652] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:23.001 
[2024-11-19 10:51:19.409663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29952 len:8 PRP1 0x0 PRP2 0x0 00:24:23.001 [2024-11-19 10:51:19.409676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.001 [2024-11-19 10:51:19.409689] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:23.001 [2024-11-19 10:51:19.409697] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:23.001 [2024-11-19 10:51:19.409707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29960 len:8 PRP1 0x0 PRP2 0x0 00:24:23.001 [2024-11-19 10:51:19.409719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.001 [2024-11-19 10:51:19.409731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:23.001 [2024-11-19 10:51:19.409740] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:23.001 [2024-11-19 10:51:19.409749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29968 len:8 PRP1 0x0 PRP2 0x0 00:24:23.001 [2024-11-19 10:51:19.409761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.001 [2024-11-19 10:51:19.409773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:23.001 [2024-11-19 10:51:19.409782] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:23.001 [2024-11-19 10:51:19.409792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29976 len:8 PRP1 0x0 PRP2 0x0 00:24:23.001 [2024-11-19 10:51:19.409803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.001 [2024-11-19 10:51:19.409815] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:23.001 [2024-11-19 10:51:19.409824] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:23.001 [2024-11-19 10:51:19.409834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29984 len:8 PRP1 0x0 PRP2 0x0 00:24:23.001 [2024-11-19 10:51:19.409846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.001 [2024-11-19 10:51:19.409858] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:23.001 [2024-11-19 10:51:19.409867] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:23.001 [2024-11-19 10:51:19.409877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29992 len:8 PRP1 0x0 PRP2 0x0 00:24:23.001 [2024-11-19 10:51:19.409889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.001 [2024-11-19 10:51:19.409901] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:23.001 [2024-11-19 10:51:19.409909] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:23.001 [2024-11-19 10:51:19.409919] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30000 len:8 PRP1 0x0 PRP2 0x0 00:24:23.001 [2024-11-19 10:51:19.409930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.001 [2024-11-19 10:51:19.409943] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:23.001 [2024-11-19 10:51:19.409960] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:23.001 [2024-11-19 10:51:19.409970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30008 len:8 PRP1 0x0 PRP2 0x0 00:24:23.001 [2024-11-19 10:51:19.409981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.001 [2024-11-19 10:51:19.409993] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:23.001 [2024-11-19 10:51:19.410002] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:23.001 [2024-11-19 10:51:19.410014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30016 len:8 PRP1 0x0 PRP2 0x0 00:24:23.001 [2024-11-19 10:51:19.410026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.001 [2024-11-19 10:51:19.410038] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:23.001 [2024-11-19 10:51:19.410047] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:23.001 [2024-11-19 10:51:19.410057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30024 len:8 PRP1 0x0 PRP2 0x0 00:24:23.001 [2024-11-19 10:51:19.410069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.001 [2024-11-19 10:51:19.410081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:23.001 [2024-11-19 10:51:19.410089] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:23.001 [2024-11-19 10:51:19.410099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30032 len:8 PRP1 0x0 PRP2 0x0 00:24:23.001 [2024-11-19 10:51:19.410111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.001 [2024-11-19 10:51:19.410123] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:23.001 [2024-11-19 10:51:19.410132] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:23.001 [2024-11-19 10:51:19.410142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30040 len:8 PRP1 0x0 PRP2 0x0 00:24:23.001 [2024-11-19 10:51:19.410153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.001 [2024-11-19 10:51:19.410166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:23.001 [2024-11-19 10:51:19.410175] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:23.001 [2024-11-19 10:51:19.410185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:30048 len:8 PRP1 0x0 PRP2 0x0 00:24:23.001 [2024-11-19 10:51:19.410196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.001 [2024-11-19 10:51:19.410208] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:23.001 [2024-11-19 10:51:19.410217] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:23.001 [2024-11-19 10:51:19.410227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29400 len:8 PRP1 0x0 PRP2 0x0 00:24:23.001 [2024-11-19 10:51:19.410239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.001 [2024-11-19 10:51:19.410251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:23.001 [2024-11-19 10:51:19.410260] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:23.001 [2024-11-19 10:51:19.410269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29408 len:8 PRP1 0x0 PRP2 0x0 00:24:23.001 [2024-11-19 10:51:19.410281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.001 [2024-11-19 10:51:19.410338] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:24:23.001 [2024-11-19 10:51:19.410353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:24:23.001 [2024-11-19 10:51:19.410402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ce340 (9): Bad file descriptor 00:24:23.001 [2024-11-19 10:51:19.415576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:24:23.001 [2024-11-19 10:51:19.532238] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:24:23.001 10747.00 IOPS, 41.98 MiB/s [2024-11-19T09:51:30.450Z] 10788.50 IOPS, 42.14 MiB/s [2024-11-19T09:51:30.450Z] 10835.14 IOPS, 42.32 MiB/s [2024-11-19T09:51:30.450Z] 10869.62 IOPS, 42.46 MiB/s [2024-11-19T09:51:30.450Z] 10910.44 IOPS, 42.62 MiB/s [2024-11-19T09:51:30.450Z]
[2024-11-19 10:51:23.826250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:60096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.001 [2024-11-19 10:51:23.826281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:23.001 [2024-11-19 10:51:23.826297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:60104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.001 [2024-11-19 10:51:23.826305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print-command/print-completion pair repeats for each queued READ from lba:60112 through lba:60640 (len:8, stride 8, cid varies per command); timestamps run 10:51:23.826315 through 10:51:23.827319 ...]
00:24:23.003 [2024-11-19 10:51:23.827327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:60648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.003 [2024-11-19 10:51:23.827334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.003 [2024-11-19 10:51:23.827342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:60656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.003 [2024-11-19 10:51:23.827349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.003 [2024-11-19 10:51:23.827357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:60664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.003 [2024-11-19 10:51:23.827365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.003 [2024-11-19 10:51:23.827373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:60672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.003 [2024-11-19 10:51:23.827380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.003 [2024-11-19 10:51:23.827388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:60680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.003 [2024-11-19 10:51:23.827395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.003 [2024-11-19 10:51:23.827403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:60688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.003 [2024-11-19 10:51:23.827410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.003 [2024-11-19 10:51:23.827420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:60696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.003 [2024-11-19 10:51:23.827427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.003 [2024-11-19 10:51:23.827435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:60704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.003 [2024-11-19 10:51:23.827442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.003 [2024-11-19 10:51:23.827450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:60712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.003 [2024-11-19 10:51:23.827457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.003 [2024-11-19 10:51:23.827465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:60720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.003 [2024-11-19 10:51:23.827472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.003 [2024-11-19 10:51:23.827480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:60728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.003 [2024-11-19 10:51:23.827486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.003 
[2024-11-19 10:51:23.827495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:60736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.003 [2024-11-19 10:51:23.827502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.003 [2024-11-19 10:51:23.827510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:60744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.003 [2024-11-19 10:51:23.827516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.003 [2024-11-19 10:51:23.827525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:60752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.003 [2024-11-19 10:51:23.827532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.003 [2024-11-19 10:51:23.827540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:60760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.003 [2024-11-19 10:51:23.827547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.003 [2024-11-19 10:51:23.827555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:60768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.003 [2024-11-19 10:51:23.827561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.003 [2024-11-19 10:51:23.827570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:60792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.003 [2024-11-19 10:51:23.827576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.004 [2024-11-19 10:51:23.827585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:60800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.004 [2024-11-19 10:51:23.827592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.004 [2024-11-19 10:51:23.827600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:60808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.004 [2024-11-19 10:51:23.827609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.004 [2024-11-19 10:51:23.827617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:60816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.004 [2024-11-19 10:51:23.827624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.004 [2024-11-19 10:51:23.827632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:60824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.004 [2024-11-19 10:51:23.827639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.004 [2024-11-19 10:51:23.827647] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.004 [2024-11-19 10:51:23.827654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.004 [2024-11-19 10:51:23.827664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.004 [2024-11-19 10:51:23.827670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.004 [2024-11-19 10:51:23.827678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:60776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.004 [2024-11-19 10:51:23.827685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.004 [2024-11-19 10:51:23.827693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:60784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.004 [2024-11-19 10:51:23.827700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.004 [2024-11-19 10:51:23.827708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:60848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.004 [2024-11-19 10:51:23.827714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.004 [2024-11-19 10:51:23.827722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:60856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.004 [2024-11-19 10:51:23.827729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.004 [2024-11-19 10:51:23.827737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:60864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.004 [2024-11-19 10:51:23.827744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.004 [2024-11-19 10:51:23.827752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:60872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.004 [2024-11-19 10:51:23.827759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.004 [2024-11-19 10:51:23.827767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:60880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.004 [2024-11-19 10:51:23.827773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.004 [2024-11-19 10:51:23.827781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:60888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.004 [2024-11-19 10:51:23.827787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.004 [2024-11-19 10:51:23.827797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:65 nsid:1 lba:60896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.004 [2024-11-19 10:51:23.827803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.004 [2024-11-19 10:51:23.827812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:60904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.004 [2024-11-19 10:51:23.827818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.004 [2024-11-19 10:51:23.827826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:60912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.004 [2024-11-19 10:51:23.827832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.004 [2024-11-19 10:51:23.827840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:60920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.004 [2024-11-19 10:51:23.827847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.004 [2024-11-19 10:51:23.827855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:60928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.004 [2024-11-19 10:51:23.827862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.004 [2024-11-19 10:51:23.827870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:60936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.004 [2024-11-19 10:51:23.827877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.004 [2024-11-19 10:51:23.827885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:60944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.004 [2024-11-19 10:51:23.827891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.004 [2024-11-19 10:51:23.827901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:60952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.004 [2024-11-19 10:51:23.827907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.004 [2024-11-19 10:51:23.827915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:60960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.004 [2024-11-19 10:51:23.827922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.004 [2024-11-19 10:51:23.827930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:60968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.004 [2024-11-19 10:51:23.827936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.004 [2024-11-19 10:51:23.827944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:60976 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:23.004 [2024-11-19 10:51:23.827954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.004 [2024-11-19 10:51:23.827962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.004 [2024-11-19 10:51:23.827969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.004 [2024-11-19 10:51:23.827977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:60992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.004 [2024-11-19 10:51:23.827990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.004 [2024-11-19 10:51:23.827998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:61000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.004 [2024-11-19 10:51:23.828004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.004 [2024-11-19 10:51:23.828012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:61008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.004 [2024-11-19 10:51:23.828019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.004 [2024-11-19 10:51:23.828027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:61016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.004 [2024-11-19 10:51:23.828033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.004 [2024-11-19 10:51:23.828041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:61024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.004 [2024-11-19 10:51:23.828048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.004 [2024-11-19 10:51:23.828056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:61032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.004 [2024-11-19 10:51:23.828062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.004 [2024-11-19 10:51:23.828071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:61040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.004 [2024-11-19 10:51:23.828077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.004 [2024-11-19 10:51:23.828085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.004 [2024-11-19 10:51:23.828092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.004 [2024-11-19 10:51:23.828100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:61056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.004 [2024-11-19 
10:51:23.828106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.004 [2024-11-19 10:51:23.828114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:61064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.004 [2024-11-19 10:51:23.828121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.004 [2024-11-19 10:51:23.828129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:61072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.004 [2024-11-19 10:51:23.828135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.004 [2024-11-19 10:51:23.828145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:61080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.005 [2024-11-19 10:51:23.828151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.005 [2024-11-19 10:51:23.828159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:61088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.005 [2024-11-19 10:51:23.828165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.005 [2024-11-19 10:51:23.828174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:61096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.005 [2024-11-19 10:51:23.828182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.005 [2024-11-19 10:51:23.828204] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:23.005 [2024-11-19 10:51:23.828211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61104 len:8 PRP1 0x0 PRP2 0x0 00:24:23.005 [2024-11-19 10:51:23.828217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.005 [2024-11-19 10:51:23.828226] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:23.005 [2024-11-19 10:51:23.828232] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:23.005 [2024-11-19 10:51:23.828239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61112 len:8 PRP1 0x0 PRP2 0x0 00:24:23.005 [2024-11-19 10:51:23.828246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.005 [2024-11-19 10:51:23.828288] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:24:23.005 [2024-11-19 10:51:23.828310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:23.005 [2024-11-19 10:51:23.828317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.005 [2024-11-19 10:51:23.828325] nvme_qpair.c: 
00:24:23.005 [2024-11-19 10:51:23.828365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:24:23.005 [2024-11-19 10:51:23.828397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ce340 (9): Bad file descriptor
00:24:23.005 [2024-11-19 10:51:23.831240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:24:23.005 [2024-11-19 10:51:23.893132] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:24:23.005 10834.90 IOPS, 42.32 MiB/s [2024-11-19T09:51:30.454Z]
00:24:23.005 10847.09 IOPS, 42.37 MiB/s [2024-11-19T09:51:30.454Z]
00:24:23.005 10867.92 IOPS, 42.45 MiB/s [2024-11-19T09:51:30.454Z]
00:24:23.005 10886.69 IOPS, 42.53 MiB/s [2024-11-19T09:51:30.454Z]
00:24:23.005 10903.86 IOPS, 42.59 MiB/s
00:24:23.005 Latency(us)
00:24:23.005 Device Information : runtime(s)  IOPS      MiB/s  Fail/s  TO/s  Average   min     max
00:24:23.005 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:23.005 Verification LBA range: start 0x0 length 0x4000
00:24:23.005 NVMe0n1            : 15.01       10920.31  42.66  659.26  0.00  11031.80  429.19  33052.94
00:24:23.005 ===================================================================================================================
00:24:23.005 Total              :             10920.31  42.66  659.26  0.00  11031.80  429.19  33052.94
00:24:23.005 Received shutdown signal, test time was about 15.000000 seconds
00:24:23.005 [... duplicate Latency(us) table omitted: the shutdown handler prints the same layout once more with an all-zero Total row ...]
10:51:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
10:51:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
10:51:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
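The three trace lines above are the pass/fail gate for the 15-second phase: the script counts 'Resetting controller successful' messages in the captured bdevperf output and requires exactly three, one per induced failover. A minimal standalone sketch of the same gate, assuming the output was captured to try.txt as it is elsewhere in this run:

  count=$(grep -c 'Resetting controller successful' try.txt)
  if (( count != 3 )); then
      echo "expected 3 successful controller resets, got $count" >&2
      exit 1
  fi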
10:51:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1790718
10:51:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
10:51:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1790718 /var/tmp/bdevperf.sock
10:51:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1790718 ']'
10:51:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
10:51:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
10:51:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
10:51:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
10:51:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
10:51:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
10:51:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
10:51:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:24:23.264 [2024-11-19 10:51:30.442032] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
10:51:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:24:23.264 [2024-11-19 10:51:30.642589] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
10:51:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:23.831 NVMe0n1
10:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:24.090
00:24:24.348 10:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:24.607
10:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
10:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:24:24.866 10:51:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:25.125 10:51:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
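The sequence above is the whole multipath setup: two extra listeners on the target, three attach calls that register 10.0.0.2:4420/4421/4422 as failover paths under the single controller name NVMe0, and a detach of the active path to force the first failover. Condensed, with the long workspace paths shortened to rpc.py (each command appears verbatim in the trace above):

  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  for port in 4420 4421 4422; do    # same -b NVMe0 each time: extra paths, not extra bdevs
      rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
          -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  done
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1    # drop the active path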
00:24:28.413 10:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
10:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
10:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
10:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1791646
10:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1791646
00:24:29.351 {
00:24:29.351   "results": [
00:24:29.351     {
00:24:29.351       "job": "NVMe0n1",
00:24:29.351       "core_mask": "0x1",
00:24:29.351       "workload": "verify",
00:24:29.351       "status": "finished",
00:24:29.351       "verify_range": {
00:24:29.351         "start": 0,
00:24:29.351         "length": 16384
00:24:29.351       },
00:24:29.351       "queue_depth": 128,
00:24:29.351       "io_size": 4096,
00:24:29.351       "runtime": 1.00603,
00:24:29.351       "iops": 11078.198463266503,
00:24:29.351       "mibps": 43.274212747134776,
00:24:29.351       "io_failed": 0,
00:24:29.351       "io_timeout": 0,
00:24:29.351       "avg_latency_us": 11497.815470536601,
00:24:29.351       "min_latency_us": 2322.2539130434784,
00:24:29.351       "max_latency_us": 12765.27304347826
00:24:29.351     }
00:24:29.351   ],
00:24:29.351   "core_count": 1
00:24:29.351 }
10:51:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:29.351 [2024-11-19 10:51:30.058007] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:24:29.351 [2024-11-19 10:51:30.058059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1790718 ]
00:24:29.351 [2024-11-19 10:51:30.134089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:29.351 [2024-11-19 10:51:30.175197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:29.351 [2024-11-19 10:51:32.333807] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:24:29.351 [... 8 entries omitted: the admin qpair's four queued ASYNC EVENT REQUEST commands (qid:0 cid:0-3) are printed and completed as ABORTED - SQ DELETION (00/08) ...]
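The JSON block above is what bdevperf.py prints when perform_tests finishes; the same numbers reappear in the human-readable table from try.txt below. A hypothetical post-processing one-liner, assuming jq is available (the test itself does not use it):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests | jq '.results[0].iops'
  # -> 11078.198463266503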
00:24:29.351 [2024-11-19 10:51:32.333915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state.
00:24:29.351 [2024-11-19 10:51:32.333940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
00:24:29.351 [2024-11-19 10:51:32.333959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c30340 (9): Bad file descriptor
00:24:29.351 [2024-11-19 10:51:32.344523] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
00:24:29.351 Running I/O for 1 seconds...
00:24:29.351 11017.00 IOPS, 43.04 MiB/s
00:24:29.351 Latency(us)
00:24:29.351 Device Information : runtime(s)  IOPS      MiB/s  Fail/s  TO/s  Average   min      max
00:24:29.351 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:29.351 Verification LBA range: start 0x0 length 0x4000
00:24:29.351 NVMe0n1            : 1.01        11078.20  43.27  0.00    0.00  11497.82  2322.25  12765.27
00:24:29.351 ===================================================================================================================
00:24:29.351 Total              :             11078.20  43.27  0.00    0.00  11497.82  2322.25  12765.27
10:51:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
10:51:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:24:29.610 10:51:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:29.870 10:51:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
10:51:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
10:51:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:30.129 10:51:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:24:33.417 10:51:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
10:51:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
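In these tables the MiB/s column is derived directly from IOPS at the fixed 4096-byte I/O size (IOPS x 4096 / 2^20), which makes an easy sanity check on the 1-second row above:

  awk 'BEGIN { printf "%.2f MiB/s\n", 11078.20 * 4096 / (1024 * 1024) }'
  # -> 43.27 MiB/s, matching the reported value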
10:51:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1790718
[... 7 trace lines omitted: killprocess confirms pid 1790718 exists ('[' -z ']', kill -0), checks uname is Linux, and reads the process name (reactor_0, not sudo) ...]
10:51:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1790718'
killing process with pid 1790718
10:51:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1790718
10:51:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1790718
10:51:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
10:51:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
10:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
10:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
10:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
10:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup
10:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync
10:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
10:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e
10:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20}
10:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
10:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
10:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e
10:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0
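The three rmmod lines above come from the single modprobe -v -r call: removing nvme-tcp also unloads its now-unused dependencies nvme-fabrics and nvme-keyring. A quick hypothetical check that nothing was left behind (not part of the test itself):

  lsmod | grep -E '^nvme_(tcp|fabrics|keyring)' && echo 'modules still loaded' || echo 'clean'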
10:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 1787713 ']'
10:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 1787713
[... 7 trace lines omitted: the same killprocess checks as above, this time for pid 1787713 (process name reactor_1) ...]
10:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1787713'
killing process with pid 1787713
10:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1787713
10:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1787713
10:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']'
10:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
10:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini
10:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr
10:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save
10:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
10:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore
10:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
10:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns
10:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
10:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
10:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
10:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:24:36.278
00:24:36.278 real	0m37.704s
00:24:36.278 user	1m59.479s
00:24:36.278 sys	0m7.988s
10:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable
10:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:36.278 ************************************
00:24:36.278 END TEST nvmf_failover
00:24:36.278 ************************************
10:51:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
10:51:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
10:51:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
10:51:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:24:36.278 ************************************
00:24:36.278 START TEST nvmf_host_discovery
00:24:36.278 ************************************
10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:24:36.278 * Looking for test storage...
00:24:36.278 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]]
10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version
10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2
10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
[... ~30 trace lines omitted: cmp_versions splits 1.15 and 2 on IFS=.-:, walks the components (1 < 2), and returns 0, i.e. the installed lcov predates 2.x ...]
10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export LCOV_OPTS=' --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 '
10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export LCOV='lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s
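The lt/cmp_versions trace collapsed above is a plain component-wise version comparison. A minimal standalone sketch of the same idea (numeric components only; the real scripts/common.sh handles more cases):

  lt() {                                  # 0 when $1 is strictly older than $2
      local IFS='.-:' i
      local -a v1 v2
      read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1                            # equal is not "less than"
  }
  lt 1.15 2 && echo 'lcov < 2: pre-2.x option names apply'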
10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy
10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob
10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[... 5 trace lines omitted: paths/export.sh@2-@6 repeatedly prepend /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to PATH, export it, and echo the result ...]
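NVME_HOSTNQN and NVME_HOSTID above come from nvme-cli: gen-hostnqn emits a UUID-based NQN, and the UUID part doubles as the host ID. For example (the UUID differs per invocation; the parameter expansion is just one way to peel it off):

  hostnqn=$(nvme gen-hostnqn)
  echo "$hostnqn"          # nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
  echo "${hostnqn##*:}"    # 80aaeb9f-0274-ea11-906e-0017a4403562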
10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0
10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args
10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']'
10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0
10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']'
10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009
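The '[: : integer expression expected' message above is a real, if harmless, bug recorded by the log: common.sh line 33 hands an empty expansion ('[' '' -eq 1 ']') to a numeric test. The usual hardening, sketched with a hypothetical VAR since the trace does not show which variable is empty:

  [ "$VAR" -eq 1 ]        # fails with '[: : integer expression expected' when VAR is empty
  [ "${VAR:-0}" -eq 1 ]   # defaulting the expansion keeps the operand numeric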
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:36.539 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:36.539 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:36.539 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:36.539 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:36.539 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:36.539 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:36.539 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:36.539 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:36.539 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:36.539 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.539 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:36.539 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.539 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:36.539 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:36.539 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:24:36.539 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.112 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:43.112 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:24:43.112 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:43.112 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:43.112 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:43.112 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:43.112 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:43.112 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:24:43.112 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:43.112 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:24:43.112 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:24:43.112 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:24:43.112 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:24:43.112 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:24:43.112 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:24:43.112 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:43.112 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:43.112 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:43.112 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:43.112 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:43.112 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:43.112 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:43.112 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:43.112 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:43.112 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:43.112 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:43.112 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:43.112 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:43.112 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:43.112 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:43.112 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:43.113 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:43.113 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:43.113 10:51:49 
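The detection pass above folds known vendor:device pairs into per-family arrays and then walks the chosen set. A condensed sketch of that logic, with pci_bus_cache treated as an assumed pre-populated map from "vendor:device" to PCI addresses:

  intel=0x8086 mellanox=0x15b3
  e810+=(${pci_bus_cache["$intel:0x1592"]})
  e810+=(${pci_bus_cache["$intel:0x159b"]})   # the 0x159b devices matched on this pool (driver: ice)
  x722+=(${pci_bus_cache["$intel:0x37d2"]})
  pci_devs=("${e810[@]}")                     # the phy pool pins the e810 family
  for pci in "${pci_devs[@]}"; do
      echo "Found $pci ($(< /sys/bus/pci/devices/$pci/vendor) - $(< /sys/bus/pci/devices/$pci/device))"
  done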
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:43.113 Found net devices under 0000:86:00.0: cvl_0_0 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:43.113 Found net devices under 0000:86:00.1: cvl_0_1 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:43.113 
10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:43.113 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:43.113 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.433 ms 00:24:43.113 00:24:43.113 --- 10.0.0.2 ping statistics --- 00:24:43.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.113 rtt min/avg/max/mdev = 0.433/0.433/0.433/0.000 ms 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:43.113 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:43.113 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:24:43.113 00:24:43.113 --- 10.0.0.1 ping statistics --- 00:24:43.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.113 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=1796092 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 1796092 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1796092 ']' 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:43.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:43.113 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.113 [2024-11-19 10:51:49.753280] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
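Stripped of the xtrace noise, the namespace plumbing that just ran comes down to the commands below: the target port cvl_0_0 moves into cvl_0_0_ns_spdk with 10.0.0.2, the initiator keeps cvl_0_1 at 10.0.0.1 in the root namespace, an iptables rule admits the NVMe/TCP port, and the cross-namespace pings prove the path before the target starts inside the namespace (a condensed recap of the traced commands):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2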
00:24:43.113 [2024-11-19 10:51:49.753327] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:43.113 [2024-11-19 10:51:49.829540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.113 [2024-11-19 10:51:49.870199] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:43.113 [2024-11-19 10:51:49.870236] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:43.113 [2024-11-19 10:51:49.870243] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:43.113 [2024-11-19 10:51:49.870249] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:43.113 [2024-11-19 10:51:49.870254] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:43.113 [2024-11-19 10:51:49.870794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:43.114 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:43.114 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:24:43.114 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:43.114 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:43.114 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.114 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:43.114 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:43.114 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.114 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.114 [2024-11-19 10:51:50.000699] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.114 [2024-11-19 10:51:50.012891] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.114 null0 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
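For reference, the rpc_cmd invocations just traced are the test harness's wrapper around scripts/rpc.py on the target's default socket; spelled out explicitly (an assumed but equivalent form), the discovery-side provisioning is:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # '-u 8192' sets the I/O unit size
  scripts/rpc.py nvmf_subsystem_add_listener \
      nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  scripts/rpc.py bdev_null_create null0 1000 512           # 1000 MiB, 512 B blocks
  # a second null bdev (null1) follows immediately below in the trace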
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.114 null1 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1796121 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1796121 /tmp/host.sock 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1796121 ']' 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:43.114 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.114 [2024-11-19 10:51:50.093794] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:24:43.114 [2024-11-19 10:51:50.093844] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1796121 ] 00:24:43.114 [2024-11-19 10:51:50.169710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.114 [2024-11-19 10:51:50.211231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
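On the host side, the test runs a second SPDK app on its own RPC socket and points the bdev_nvme discovery poller at the target's discovery service; condensed from the trace:

  # the second app instance acts as the NVMe-oF host; RPC over /tmp/host.sock
  build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
  scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
  # every subsystem the discovery log reports is auto-attached as bdevs
  # named with the prefix 'nvme'
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test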
host/discovery.sh@55 -- # sort 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:43.114 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.373 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:43.373 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:24:43.373 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:43.373 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:43.373 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:43.373 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.373 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:43.373 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.373 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.373 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:43.373 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:43.373 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.373 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.373 [2024-11-19 10:51:50.638467] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:43.373 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.373 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:24:43.373 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:43.373 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:43.373 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.373 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:43.373 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.374 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:43.374 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.374 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:24:43.374 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:24:43.374 10:51:50 
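At this point the target carries a data subsystem, a namespace, and a 4420 listener, yet every get_subsystem_names / get_bdev_list probe above still returns '': the host NQN has not been allow-listed, so the discovery log shows this host nothing. Condensed, the provisioning so far (the nvmf_subsystem_add_host grant follows just below):

  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420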
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:43.374 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:43.374 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.374 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:43.374 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.374 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:43.374 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.374 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:43.374 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:43.374 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:43.374 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:43.374 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:43.374 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:43.374 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:43.374 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:43.374 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:43.374 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:43.374 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:43.374 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.374 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.374 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.374 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:43.374 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:24:43.374 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:43.374 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:43.374 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:43.374 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.374 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.374 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.374 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:43.374 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:43.374 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:43.374 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:43.374 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:43.374 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:43.374 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:43.374 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:43.374 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.374 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:43.374 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.374 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:43.374 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.633 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:24:43.633 10:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:24:44.200 [2024-11-19 10:51:51.378444] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:44.200 [2024-11-19 10:51:51.378462] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:44.200 [2024-11-19 10:51:51.378474] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:44.200 
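Every '[[ "$(get_subsystem_names)" == "nvme0" ]]'-style gate above runs through the same polling helper; reconstructed from its trace (autotest_common.sh lines 918-924), waitforcondition is approximately:

  waitforcondition() {
      local cond=$1
      local max=10
      while (( max-- )); do
          eval "$cond" && return 0   # e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
          sleep 1                    # retry once per second, at most ten times
      done
      return 1                       # assumed failure path; the trace only shows success
  }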
[2024-11-19 10:51:51.504853] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:44.459 [2024-11-19 10:51:51.679858] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:24:44.459 [2024-11-19 10:51:51.680660] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x15dfdd0:1 started. 00:24:44.459 [2024-11-19 10:51:51.682050] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:44.459 [2024-11-19 10:51:51.682066] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:44.459 [2024-11-19 10:51:51.687867] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x15dfdd0 was disconnected and freed. delete nvme_qpair. 00:24:44.459 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:44.459 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:44.459 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:44.460 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:44.460 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:44.460 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.460 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:44.460 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.460 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:44.460 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.460 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.460 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:44.460 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:44.460 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:44.460 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:44.460 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:44.460 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:44.460 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:44.460 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:44.460 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:44.460 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:44.460 10:51:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:44.460 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.460 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.719 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.719 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:24:44.719 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:44.720 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:44.720 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:44.720 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:44.720 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:44.720 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:44.720 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:44.720 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:44.720 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:44.720 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:44.720 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:44.720 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.720 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.720 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.720 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:24:44.720 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:44.720 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:44.720 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:44.720 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:44.720 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:44.720 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:44.720 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:44.720 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:44.720 10:51:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:44.720 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:44.720 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:44.720 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.720 10:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:44.720 [2024-11-19 10:51:52.052273] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x15e01a0:1 started. 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:44.720 [2024-11-19 10:51:52.058665] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x15e01a0 was disconnected and freed. delete nvme_qpair. 
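The notification arithmetic above behaves like a cursor over the target's event stream: each poll counts the events past notify_id with jq and advances the cursor by that count (0 -> 1 once nvme0n1 appears, 1 -> 2 once nvme0n2 does). Reconstructed from the host/discovery.sh@74-75 trace, the helper is approximately:

  get_notification_count() {
      notification_count=$(rpc_cmd -s /tmp/host.sock \
          notify_get_notifications -i $notify_id | jq '. | length')
      notify_id=$((notify_id + notification_count))   # cursor moves past consumed events
  }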
00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.720 [2024-11-19 10:51:52.150609] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:44.720 [2024-11-19 10:51:52.151010] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:44.720 [2024-11-19 10:51:52.151030] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.720 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:44.980 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.980 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.980 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:44.980 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 
nvme0n2" ]]' 00:24:44.981 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:44.981 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:44.981 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:44.981 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:44.981 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:44.981 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:44.981 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.981 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.981 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:44.981 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:44.981 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:44.981 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.981 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:44.981 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:44.981 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:44.981 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:44.981 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:44.981 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:44.981 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:44.981 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:44.981 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:44.981 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:44.981 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.981 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:44.981 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.981 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:44.981 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.981 [2024-11-19 10:51:52.277417] bdev_nvme.c:7402:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new 
path for nvme0 00:24:44.981 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:44.981 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:24:44.981 [2024-11-19 10:51:52.376162] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:24:44.981 [2024-11-19 10:51:52.376196] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:44.981 [2024-11-19 10:51:52.376204] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:44.981 [2024-11-19 10:51:52.376208] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:45.918 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:45.918 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:45.918 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:45.918 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:45.918 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:45.918 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.918 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:45.918 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:45.918 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:45.918 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.918 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:45.918 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:45.918 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:24:45.919 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:45.919 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:45.919 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:45.919 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:45.919 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:45.919 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:45.919 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:45.919 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:45.919 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:45.919 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.919 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:45.919 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.179 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:46.179 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:46.179 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:46.179 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:46.179 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:46.179 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.179 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:46.179 [2024-11-19 10:51:53.407010] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:46.179 [2024-11-19 10:51:53.407033] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:46.179 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.179 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:46.179 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:46.179 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:46.179 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:46.179 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:46.179 [2024-11-19 10:51:53.414237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.179 [2024-11-19 10:51:53.414255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.179 [2024-11-19 10:51:53.414264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.179 [2024-11-19 10:51:53.414271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.179 [2024-11-19 10:51:53.414280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.179 [2024-11-19 10:51:53.414287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.179 [2024-11-19 10:51:53.414295] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.179 [2024-11-19 10:51:53.414301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.179 [2024-11-19 10:51:53.414308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b0390 is same with the state(6) to be set 00:24:46.179 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:46.179 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:46.179 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:46.179 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.179 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:46.179 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:46.179 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:46.179 [2024-11-19 10:51:53.424245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b0390 (9): Bad file descriptor 00:24:46.179 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.179 [2024-11-19 10:51:53.434279] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:46.179 [2024-11-19 10:51:53.434291] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:46.179 [2024-11-19 10:51:53.434296] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:46.179 [2024-11-19 10:51:53.434300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:46.179 [2024-11-19 10:51:53.434320] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:46.179 [2024-11-19 10:51:53.434444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.179 [2024-11-19 10:51:53.434457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b0390 with addr=10.0.0.2, port=4420 00:24:46.179 [2024-11-19 10:51:53.434465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b0390 is same with the state(6) to be set 00:24:46.179 [2024-11-19 10:51:53.434476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b0390 (9): Bad file descriptor 00:24:46.179 [2024-11-19 10:51:53.434486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:46.179 [2024-11-19 10:51:53.434492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:46.179 [2024-11-19 10:51:53.434500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:46.179 [2024-11-19 10:51:53.434506] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
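The burst of connect() failed, errno = 111 entries here is the host repeatedly trying to reconnect to 10.0.0.2:4420, the listener that was just removed; errno 111 on Linux is ECONNREFUSED. A quick way to decode it on the test node (assuming python3 is present, as it normally is on these autotest images):

  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # ECONNREFUSED - Connection refused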
00:24:46.179 [2024-11-19 10:51:53.434511] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:46.179 [2024-11-19 10:51:53.434515] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:46.179 [2024-11-19 10:51:53.444350] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:46.179 [2024-11-19 10:51:53.444361] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:46.179 [2024-11-19 10:51:53.444365] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:46.179 [2024-11-19 10:51:53.444370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:46.179 [2024-11-19 10:51:53.444383] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:46.179 [2024-11-19 10:51:53.444638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.179 [2024-11-19 10:51:53.444650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b0390 with addr=10.0.0.2, port=4420 00:24:46.179 [2024-11-19 10:51:53.444657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b0390 is same with the state(6) to be set 00:24:46.179 [2024-11-19 10:51:53.444668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b0390 (9): Bad file descriptor 00:24:46.179 [2024-11-19 10:51:53.444677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:46.179 [2024-11-19 10:51:53.444684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:46.179 [2024-11-19 10:51:53.444691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:46.179 [2024-11-19 10:51:53.444696] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:46.179 [2024-11-19 10:51:53.444701] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:46.179 [2024-11-19 10:51:53.444705] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:46.179 [2024-11-19 10:51:53.454415] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:46.179 [2024-11-19 10:51:53.454429] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:46.179 [2024-11-19 10:51:53.454433] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:46.179 [2024-11-19 10:51:53.454440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:46.179 [2024-11-19 10:51:53.454455] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
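The waitforcondition helper driving every '(( max-- ))' / eval loop in this trace only shows up piecemeal through xtrace (autotest_common.sh@918-924). A reconstructed sketch, not the verbatim helper; the timeout branch is an assumption, since no condition ever runs out of retries in this log:

  waitforcondition() {
      local cond=$1        # @918: the condition string, eval'd verbatim
      local max=10         # @919: up to ten one-second polls
      while ((max--)); do  # @920
          eval "$cond" && return 0  # @921-922
          sleep 1          # @924
      done
      return 1             # assumed timeout path, never hit in this run
  }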
00:24:46.179 [2024-11-19 10:51:53.454614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.179 [2024-11-19 10:51:53.454626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b0390 with addr=10.0.0.2, port=4420 00:24:46.179 [2024-11-19 10:51:53.454635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b0390 is same with the state(6) to be set 00:24:46.179 [2024-11-19 10:51:53.454647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b0390 (9): Bad file descriptor 00:24:46.179 [2024-11-19 10:51:53.454657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:46.179 [2024-11-19 10:51:53.454664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:46.179 [2024-11-19 10:51:53.454673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:46.179 [2024-11-19 10:51:53.454680] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:46.179 [2024-11-19 10:51:53.454685] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:46.179 [2024-11-19 10:51:53.454690] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:46.179 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.179 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:46.179 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:46.180 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:46.180 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:46.180 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:46.180 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:46.180 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:46.180 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:46.180 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:46.180 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.180 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:46.180 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:46.180 [2024-11-19 10:51:53.464487] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:46.180 [2024-11-19 10:51:53.464499] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
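The get_bdev_list and get_subsystem_paths helpers exercised by these checks can be read back out of the xtrace (host/discovery.sh@55 and @63). Reconstructed sketches, not the verbatim test source; rpc_cmd is the harness wrapper around SPDK's rpc.py:

  get_bdev_list() {
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  get_subsystem_paths() {  # $1 is a controller name such as nvme0
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
          | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  }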
00:24:46.180 [2024-11-19 10:51:53.464503] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:46.180 [2024-11-19 10:51:53.464508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:46.180 [2024-11-19 10:51:53.464521] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:46.180 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:46.180 [2024-11-19 10:51:53.464768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.180 [2024-11-19 10:51:53.464781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b0390 with addr=10.0.0.2, port=4420 00:24:46.180 [2024-11-19 10:51:53.464791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b0390 is same with the state(6) to be set 00:24:46.180 [2024-11-19 10:51:53.464802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b0390 (9): Bad file descriptor 00:24:46.180 [2024-11-19 10:51:53.464811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:46.180 [2024-11-19 10:51:53.464817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:46.180 [2024-11-19 10:51:53.464823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:46.180 [2024-11-19 10:51:53.464829] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:46.180 [2024-11-19 10:51:53.464833] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:46.180 [2024-11-19 10:51:53.464837] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:46.180 [2024-11-19 10:51:53.474552] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:46.180 [2024-11-19 10:51:53.474566] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:46.180 [2024-11-19 10:51:53.474570] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:46.180 [2024-11-19 10:51:53.474574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:46.180 [2024-11-19 10:51:53.474588] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:46.180 [2024-11-19 10:51:53.474823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.180 [2024-11-19 10:51:53.474835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b0390 with addr=10.0.0.2, port=4420 00:24:46.180 [2024-11-19 10:51:53.474843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b0390 is same with the state(6) to be set 00:24:46.180 [2024-11-19 10:51:53.474853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b0390 (9): Bad file descriptor 00:24:46.180 [2024-11-19 10:51:53.474863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:46.180 [2024-11-19 10:51:53.474869] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:46.180 [2024-11-19 10:51:53.474875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:46.180 [2024-11-19 10:51:53.474881] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:46.180 [2024-11-19 10:51:53.474885] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:46.180 [2024-11-19 10:51:53.474889] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:46.180 [2024-11-19 10:51:53.484618] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:46.180 [2024-11-19 10:51:53.484629] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:46.180 [2024-11-19 10:51:53.484633] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:46.180 [2024-11-19 10:51:53.484637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:46.180 [2024-11-19 10:51:53.484649] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:46.180 [2024-11-19 10:51:53.484894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.180 [2024-11-19 10:51:53.484909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b0390 with addr=10.0.0.2, port=4420 00:24:46.180 [2024-11-19 10:51:53.484917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b0390 is same with the state(6) to be set 00:24:46.180 [2024-11-19 10:51:53.484927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b0390 (9): Bad file descriptor 00:24:46.180 [2024-11-19 10:51:53.484937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:46.180 [2024-11-19 10:51:53.484943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:46.180 [2024-11-19 10:51:53.484954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:46.180 [2024-11-19 10:51:53.484960] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
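The is_notification_count_eq checks that follow build on a get_notification_count helper. From the values in the trace (notification_count=1 advancing notify_id to 2, then notification_count=2 advancing it to 4), it appears to count the notifications past the last seen notify_id and move the cursor forward by that many. A hedged reconstruction, inferred from host/discovery.sh@74-75 rather than copied from the source:

  get_notification_count() {
      notification_count=$(rpc_cmd -s /tmp/host.sock \
          notify_get_notifications -i "$notify_id" | jq '. | length')
      notify_id=$((notify_id + notification_count))
  }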
00:24:46.180 [2024-11-19 10:51:53.484964] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:46.180 [2024-11-19 10:51:53.484968] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:46.180 [2024-11-19 10:51:53.493283] bdev_nvme.c:7265:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:46.180 [2024-11-19 10:51:53.493300] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:46.180 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.180 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:46.180 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:46.180 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:46.180 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:46.180 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:46.180 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:46.180 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:46.180 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:46.180 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:46.180 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:46.180 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:46.180 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:46.180 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.180 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:46.180 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.180 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:24:46.180 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:46.180 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:46.180 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:46.180 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:46.180 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:46.180 10:51:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:46.180 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:46.180 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:46.180 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:46.180 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:46.181 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:46.181 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.181 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:46.181 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.181 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:46.181 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:46.181 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:46.181 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:46.181 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:46.181 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.181 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:46.181 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.181 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:46.181 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:46.181 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:46.181 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:46.181 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:46.181 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:46.181 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:46.181 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:46.181 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.181 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:46.181 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:46.181 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:46.181 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.440 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:24:46.440 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:46.440 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:46.440 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:46.440 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:46.440 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:46.440 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:24:46.440 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:46.440 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:46.440 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:46.440 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.440 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:46.440 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:46.440 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:46.440 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.440 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:24:46.440 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:46.440 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:46.440 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:24:46.440 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:46.440 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:46.440 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:46.440 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:46.440 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:46.440 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:46.440 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:46.440 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:46.440 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.440 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:46.440 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.440 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:46.440 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:46.440 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:46.440 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:46.440 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:46.440 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.440 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:47.377 [2024-11-19 10:51:54.787443] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:47.377 [2024-11-19 10:51:54.787459] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:47.377 [2024-11-19 10:51:54.787470] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:47.636 [2024-11-19 10:51:54.915875] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:24:47.895 [2024-11-19 10:51:55.223273] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:24:47.895 [2024-11-19 10:51:55.223821] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x15e61b0:1 started. 00:24:47.895 [2024-11-19 10:51:55.225435] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:47.895 [2024-11-19 10:51:55.225461] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:47.895 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.895 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:47.895 [2024-11-19 10:51:55.226931] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x15e61b0 was disconnected and freed. delete nvme_qpair. 
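host/discovery.sh@143 now asserts the negative case: starting a second discovery service under a name that is already registered has to fail, and the request/response pair just below shows bdev_nvme_start_discovery being rejected with code -17 ("File exists"). A minimal standalone sketch of the same assertion, written without the harness's NOT helper:

  if rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w; then
      echo "duplicate discovery name unexpectedly accepted" >&2
      exit 1
  fi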
00:24:47.895 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:47.895 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:47.895 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:47.895 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:47.895 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:47.895 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:47.895 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:47.895 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.895 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:47.895 request: 00:24:47.895 { 00:24:47.895 "name": "nvme", 00:24:47.895 "trtype": "tcp", 00:24:47.895 "traddr": "10.0.0.2", 00:24:47.895 "adrfam": "ipv4", 00:24:47.895 "trsvcid": "8009", 00:24:47.895 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:47.895 "wait_for_attach": true, 00:24:47.895 "method": "bdev_nvme_start_discovery", 00:24:47.895 "req_id": 1 00:24:47.895 } 00:24:47.895 Got JSON-RPC error response 00:24:47.895 response: 00:24:47.895 { 00:24:47.895 "code": -17, 00:24:47.895 "message": "File exists" 00:24:47.895 } 00:24:47.895 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:47.895 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:47.895 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:47.895 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:47.895 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:47.895 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:47.895 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:47.895 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:47.895 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.895 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:47.895 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:47.895 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:47.895 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.895 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:47.895 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:47.895 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:47.895 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:47.895 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.895 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:47.895 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:47.895 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:47.895 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.154 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:48.154 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:48.154 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:48.154 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:48.155 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:48.155 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:48.155 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:48.155 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:48.155 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:48.155 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.155 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:48.155 request: 00:24:48.155 { 00:24:48.155 "name": "nvme_second", 00:24:48.155 "trtype": "tcp", 00:24:48.155 "traddr": "10.0.0.2", 00:24:48.155 "adrfam": "ipv4", 00:24:48.155 "trsvcid": "8009", 00:24:48.155 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:48.155 "wait_for_attach": true, 00:24:48.155 "method": "bdev_nvme_start_discovery", 00:24:48.155 "req_id": 1 00:24:48.155 } 00:24:48.155 Got JSON-RPC error response 00:24:48.155 response: 00:24:48.155 { 00:24:48.155 "code": -17, 00:24:48.155 "message": "File exists" 00:24:48.155 } 00:24:48.155 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:48.155 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:48.155 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:48.155 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:48.155 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:48.155 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # 
get_discovery_ctrlrs 00:24:48.155 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:48.155 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:48.155 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.155 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:48.155 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:48.155 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:48.155 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.155 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:48.155 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:24:48.155 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:48.155 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:48.155 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.155 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:48.155 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:48.155 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:48.155 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.155 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:48.155 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:48.155 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:48.155 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:48.155 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:48.155 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:48.155 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:48.155 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:48.155 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:48.155 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.155 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:49.092 [2024-11-19 10:51:56.472926] posix.c:1054:posix_sock_create: *ERROR*: connect() 
failed, errno = 111 00:24:49.092 [2024-11-19 10:51:56.472957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1180 with addr=10.0.0.2, port=8010 00:24:49.092 [2024-11-19 10:51:56.472973] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:49.092 [2024-11-19 10:51:56.472995] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:49.092 [2024-11-19 10:51:56.473002] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:50.029 [2024-11-19 10:51:57.475357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.029 [2024-11-19 10:51:57.475381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c77f0 with addr=10.0.0.2, port=8010 00:24:50.029 [2024-11-19 10:51:57.475404] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:50.029 [2024-11-19 10:51:57.475409] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:50.029 [2024-11-19 10:51:57.475415] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:51.408 [2024-11-19 10:51:58.477541] bdev_nvme.c:7521:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:24:51.408 request: 00:24:51.408 { 00:24:51.408 "name": "nvme_second", 00:24:51.408 "trtype": "tcp", 00:24:51.408 "traddr": "10.0.0.2", 00:24:51.408 "adrfam": "ipv4", 00:24:51.408 "trsvcid": "8010", 00:24:51.408 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:51.408 "wait_for_attach": false, 00:24:51.408 "attach_timeout_ms": 3000, 00:24:51.408 "method": "bdev_nvme_start_discovery", 00:24:51.408 "req_id": 1 00:24:51.408 } 00:24:51.408 Got JSON-RPC error response 00:24:51.408 response: 00:24:51.408 { 00:24:51.408 "code": -110, 00:24:51.408 "message": "Connection timed out" 00:24:51.408 } 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:51.408 10:51:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1796121 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:51.408 rmmod nvme_tcp 00:24:51.408 rmmod nvme_fabrics 00:24:51.408 rmmod nvme_keyring 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 1796092 ']' 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 1796092 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 1796092 ']' 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 1796092 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1796092 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1796092' 00:24:51.408 killing process with pid 1796092 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 1796092 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 1796092 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:24:51.408 10:51:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:51.408 10:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.946 10:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:53.946 00:24:53.946 real 0m17.321s 00:24:53.946 user 0m20.822s 00:24:53.946 sys 0m5.739s 00:24:53.946 10:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:53.946 10:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.946 ************************************ 00:24:53.946 END TEST nvmf_host_discovery 00:24:53.946 ************************************ 00:24:53.946 10:52:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:53.946 10:52:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:53.946 10:52:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:53.946 10:52:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.946 ************************************ 00:24:53.946 START TEST nvmf_host_multipath_status 00:24:53.946 ************************************ 00:24:53.946 10:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:53.946 * Looking for test storage... 
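That wraps up the discovery test: nvmftestfini kills the host-side RPC socket process, unloads the kernel NVMe/TCP modules (the rmmod lines above), restores the iptables rules and flushes the namespace interface before the next test starts. A hedged sketch of that teardown, using the pids and interface name this particular run happened to get:

  kill 1796121                     # host/discovery.sh@161: host sock process
  modprobe -v -r nvme-tcp          # emits the rmmod nvme_tcp/... lines above
  modprobe -v -r nvme-fabrics
  killprocess 1796092              # harness helper: kill, then wait on the pid
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip -4 addr flush cvl_0_1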
00:24:53.946 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:53.946 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:53.946 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:24:53.946 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:53.946 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:53.946 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:53.946 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:53.946 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:53.946 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:24:53.946 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:24:53.946 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:24:53.946 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:24:53.946 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:24:53.946 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:24:53.946 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:24:53.946 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:53.946 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:24:53.946 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:24:53.946 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:53.946 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:53.946 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:24:53.946 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:24:53.946 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:53.946 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:24:53.946 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:24:53.946 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:24:53.946 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:24:53.946 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:53.946 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:24:53.946 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:24:53.946 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:53.946 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:53.946 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:24:53.946 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:53.946 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:53.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.946 --rc genhtml_branch_coverage=1 00:24:53.946 --rc genhtml_function_coverage=1 00:24:53.946 --rc genhtml_legend=1 00:24:53.946 --rc geninfo_all_blocks=1 00:24:53.946 --rc geninfo_unexecuted_blocks=1 00:24:53.946 00:24:53.946 ' 00:24:53.946 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:53.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.946 --rc genhtml_branch_coverage=1 00:24:53.946 --rc genhtml_function_coverage=1 00:24:53.946 --rc genhtml_legend=1 00:24:53.946 --rc geninfo_all_blocks=1 00:24:53.946 --rc geninfo_unexecuted_blocks=1 00:24:53.946 00:24:53.946 ' 00:24:53.946 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:53.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.946 --rc genhtml_branch_coverage=1 00:24:53.946 --rc genhtml_function_coverage=1 00:24:53.946 --rc genhtml_legend=1 00:24:53.946 --rc geninfo_all_blocks=1 00:24:53.946 --rc geninfo_unexecuted_blocks=1 00:24:53.946 00:24:53.946 ' 00:24:53.946 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:53.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.946 --rc genhtml_branch_coverage=1 00:24:53.946 --rc genhtml_function_coverage=1 00:24:53.946 --rc genhtml_legend=1 00:24:53.947 --rc geninfo_all_blocks=1 00:24:53.947 --rc geninfo_unexecuted_blocks=1 00:24:53.947 00:24:53.947 ' 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
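
The lt/cmp_versions exchange above decides whether the installed lcov (1.15 here) predates version 2, which selects the legacy `--rc lcov_branch_coverage=1` option spelling used in the LCOV_OPTS exports that follow. The comparison splits each version string on dots and dashes and compares component by component. A simplified sketch of that logic, assuming purely numeric components (helper name from the log, body reconstructed):

    # lt A B: succeed (return 0) iff version A sorts strictly before B.
    lt() {
        local -a ver1 ver2
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$2"
        local i
        for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
            (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
            (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }

So `lt 1.15 2` returns 0 at the first component (1 < 2), which is why the pre-2.0 LCOV_OPTS block is taken above.
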
00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:53.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:24:53.947 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:00.519 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:00.519 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:25:00.519 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:00.519 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:00.519 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:00.519 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:00.519 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:00.519 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:25:00.519 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:00.519 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:25:00.519 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:25:00.519 10:52:06 
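
Note the `[: : integer expression expected` complaint from nvmf/common.sh line 33 a little earlier: build_nvmf_app_args evaluates `[ '' -eq 1 ]` because the flag it tests is unset in this environment, so the numeric test sees an empty operand. The branch simply falls through, so it is harmless here, but the defensive spelling defaults the variable first. A hedged sketch (SOME_TEST_FLAG is a placeholder; the log does not show which variable was empty):

    # Guard a possibly-unset flag before a numeric test. SOME_TEST_FLAG is
    # hypothetical; substitute whichever flag common.sh line 33 actually reads.
    if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
        : # append whatever extra nvmf_tgt option this flag gates
    fi
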
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:25:00.519 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:25:00.519 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:25:00.519 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:25:00.519 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:00.519 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:00.519 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:00.519 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:00.519 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:00.519 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:00.519 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:00.519 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:00.519 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:00.519 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:00.519 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:00.519 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:00.519 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:00.519 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:00.519 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:00.519 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:00.519 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:00.519 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:00.519 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:00.519 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:00.519 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:00.519 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
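
gather_supported_nvmf_pci_devs buckets candidate NICs by PCI vendor:device ID: Intel E810 parts (0x1592, 0x159b), X722 (0x37d2), and a list of Mellanox ConnectX IDs, then keeps the E810 set for TCP runs. Both ports at 0000:86:00.x report 0x8086 - 0x159b and are bound to ice, so this host classifies as E810. A rough stand-in for the pci_bus_cache lookups the log shows (the lspci scan below is illustrative, not the real implementation):

    # Classify candidate NICs by vendor:device ID.
    declare -a e810 x722 mlx
    while read -r addr vd; do
        case "$vd" in
            8086:1592|8086:159b) e810+=("$addr") ;;   # Intel E810 (ice)
            8086:37d2)           x722+=("$addr") ;;   # Intel X722 (i40e)
            15b3:*)              mlx+=("$addr") ;;    # Mellanox ConnectX
        esac
    done < <(lspci -Dn | awk '{print $1, $3}')
    pci_devs=("${e810[@]}")   # TCP tests prefer the E810 ports
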
00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:00.520 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:00.520 Found net devices under 0000:86:00.0: cvl_0_0 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: 
cvl_0_1' 00:25:00.520 Found net devices under 0000:86:00.1: cvl_0_1 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:00.520 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:00.520 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:00.520 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:00.520 10:52:07 
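
The namespace plumbing above exists because both E810 ports sit in the same box: isolating the target port in its own network namespace (cvl_0_0_ns_spdk) forces initiator traffic to actually traverse the wire instead of short-circuiting through loopback. Condensed from the commands logged, with the target port moved into the namespace and the initiator port left in the host:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
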
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:00.520 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:00.520 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:00.520 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.459 ms 00:25:00.520 00:25:00.520 --- 10.0.0.2 ping statistics --- 00:25:00.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.520 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:25:00.520 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:00.520 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:00.520 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:25:00.520 00:25:00.520 --- 10.0.0.1 ping statistics --- 00:25:00.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.520 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:25:00.520 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:00.520 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:25:00.520 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:00.520 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:00.520 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:00.520 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:00.520 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:00.520 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:00.520 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:00.520 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:00.520 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:00.520 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:00.520 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:00.520 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=1801201 00:25:00.520 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:00.520 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 1801201 00:25:00.520 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1801201 ']' 00:25:00.520 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.520 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:00.520 10:52:07 
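
Before launching the target, the suite opens the NVMe/TCP port through the host firewall (tagging the rule so the `iptr` teardown shown earlier can strip it) and ping-checks both directions across the namespace boundary; the sub-millisecond RTTs with 0% loss above confirm the link. nvmf_tgt then runs inside the namespace on cores 0-1 (mask 0x3). Condensed, with repository paths shortened:

    # The rule comment doubles as the teardown marker (grep -v SPDK_NVMF).
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                  # host -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> host
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
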
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:00.520 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:00.520 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:00.520 [2024-11-19 10:52:07.134173] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:25:00.520 [2024-11-19 10:52:07.134225] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:00.520 [2024-11-19 10:52:07.215818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:00.520 [2024-11-19 10:52:07.258566] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:00.520 [2024-11-19 10:52:07.258608] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:00.520 [2024-11-19 10:52:07.258615] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:00.520 [2024-11-19 10:52:07.258621] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:00.520 [2024-11-19 10:52:07.258626] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:00.520 [2024-11-19 10:52:07.259862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:00.520 [2024-11-19 10:52:07.259863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:00.521 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:00.521 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:25:00.521 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:00.521 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:00.521 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:00.521 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:00.521 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1801201 00:25:00.521 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:00.521 [2024-11-19 10:52:07.564971] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:00.521 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:00.521 Malloc0 00:25:00.521 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:25:00.779 10:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:00.779 10:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:01.038 [2024-11-19 10:52:08.398254] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:01.038 10:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:01.297 [2024-11-19 10:52:08.582716] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:01.297 10:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:01.297 10:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1801463 00:25:01.297 10:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:01.297 10:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1801463 /var/tmp/bdevperf.sock 00:25:01.297 10:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1801463 ']' 00:25:01.297 10:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:01.297 10:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:01.297 10:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:01.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
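
The target is provisioned entirely over JSON-RPC: one TCP transport, a 64 MiB malloc bdev, and subsystem cnode1 exposing that namespace on two listeners (4420 and 4421), which is what gives the host two paths to the same namespace. bdevperf is then started in wait-for-RPC mode (-z) on its own socket so the multipath controller can be assembled by hand. Condensed from the log, paths shortened:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -r -m 2                         # -r enables ANA reporting
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
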
00:25:01.297 10:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:01.297 10:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:01.556 10:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:01.556 10:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:25:01.556 10:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:01.816 10:52:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:02.075 Nvme0n1 00:25:02.075 10:52:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:02.334 Nvme0n1 00:25:02.334 10:52:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:02.334 10:52:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:04.239 10:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:04.239 10:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:04.499 10:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:04.758 10:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:05.695 10:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:05.695 10:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:05.695 10:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.695 10:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:05.953 10:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:05.953 10:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:05.953 10:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.953 10:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:06.212 10:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:06.212 10:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:06.212 10:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:06.212 10:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:06.470 10:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:06.470 10:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:06.471 10:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:06.471 10:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:06.729 10:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:06.729 10:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:06.729 10:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:06.729 10:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:06.729 10:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:06.729 10:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:06.729 10:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:06.729 10:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:06.988 10:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:06.988 10:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:06.988 10:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
00:25:07.246 10:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:07.505 10:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:08.442 10:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:08.442 10:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:08.442 10:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.442 10:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:08.701 10:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:08.701 10:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:08.701 10:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.701 10:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:08.960 10:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:08.960 10:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:08.960 10:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.960 10:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:08.960 10:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:08.960 10:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:08.960 10:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.960 10:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:09.219 10:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:09.219 10:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:09.219 10:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
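
Every check in this block is the same probe repeated six ways: ask bdevperf for its NVMe I/O paths and extract one boolean (current, connected, or accessible) for the path whose listener port matches. A sketch of the `port_status` helper as the log shows it invoked (name and jq filter taken from the log; the wrapper body is reconstructed):

    # port_status <trsvcid> <field> <expected>, e.g. port_status 4420 current true
    port_status() {
        local status
        status=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
        [[ "$status" == "$3" ]]
    }

check_status is then six port_status calls in a fixed order: current for 4420 and 4421, then connected for both, then accessible for both.
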
00:25:09.219 10:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:09.479 10:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:09.479 10:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:09.479 10:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:09.479 10:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:09.738 10:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:09.738 10:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:09.738 10:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:09.997 10:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:10.256 10:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:11.193 10:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:11.193 10:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:11.193 10:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.193 10:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:11.453 10:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.453 10:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:11.453 10:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.453 10:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:11.712 10:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:11.712 10:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:11.712 10:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.712 10:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:11.712 10:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.712 10:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:11.712 10:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:11.712 10:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.970 10:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.970 10:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:11.970 10:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:11.970 10:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:12.228 10:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:12.228 10:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:12.228 10:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:12.228 10:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:12.487 10:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:12.487 10:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:12.487 10:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:12.746 10:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:12.746 10:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:14.124 10:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:14.124 10:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:14.124 10:52:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:14.124 10:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:14.124 10:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:14.124 10:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:14.124 10:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:14.124 10:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:14.383 10:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:14.383 10:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:14.383 10:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:14.384 10:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:14.384 10:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:14.384 10:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:14.384 10:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:14.384 10:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:14.642 10:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:14.643 10:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:14.643 10:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:14.643 10:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:14.901 10:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:14.901 10:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:14.901 10:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:14.901 10:52:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:15.161 10:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:15.161 10:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:15.161 10:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:15.431 10:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:15.431 10:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:16.496 10:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:16.496 10:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:16.496 10:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.496 10:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:16.755 10:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:16.755 10:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:16.755 10:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.755 10:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:17.015 10:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:17.015 10:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:17.015 10:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:17.015 10:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:17.015 10:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:17.015 10:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:17.015 10:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:17.015 10:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:17.274 10:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:17.274 10:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:17.274 10:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:17.274 10:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:17.533 10:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:17.533 10:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:17.533 10:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:17.533 10:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:17.792 10:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:17.792 10:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:17.792 10:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:17.792 10:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:18.052 10:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:18.990 10:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:18.990 10:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:18.990 10:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.990 10:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:19.250 10:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:19.250 10:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:19.250 10:52:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.250 10:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:19.510 10:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:19.510 10:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:19.510 10:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.510 10:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:19.769 10:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:19.769 10:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:19.769 10:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.769 10:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:20.029 10:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:20.029 10:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:20.029 10:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:20.029 10:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.288 10:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:20.288 10:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:20.288 10:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.288 10:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:20.288 10:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:20.288 10:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:20.547 10:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:25:20.547 10:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:20.806 10:52:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:21.065 10:52:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:22.002 10:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:22.002 10:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:22.002 10:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.002 10:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:22.260 10:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.260 10:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:22.260 10:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.260 10:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:22.519 10:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.519 10:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:22.519 10:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.519 10:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:22.778 10:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.778 10:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:22.778 10:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.778 10:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:22.778 10:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.778 10:52:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:22.778 10:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.778 10:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:23.038 10:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.038 10:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:23.038 10:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.038 10:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:23.297 10:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.297 10:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:23.297 10:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:23.556 10:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:23.815 10:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:24.752 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:24.753 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:24.753 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.753 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:25.012 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:25.012 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:25.012 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.012 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:25.271 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:25.271 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:25.271 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.271 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:25.271 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:25.271 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:25.271 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.271 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:25.530 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:25.530 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:25.530 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.530 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:25.789 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:25.789 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:25.789 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.789 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:26.047 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:26.047 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:26.047 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:26.307 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:26.566 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
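For readability: the helpers driving the trace above can be reconstructed from the xtrace markers (host/multipath_status.sh@59-73). This is a minimal sketch inferred from this log, not copied from the repository; the rpc_py/bperf_rpc variable names are assumptions.

    # Sketch inferred from the xtrace above; helper variable names are assumed.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    bperf_rpc="$rpc_py -s /var/tmp/bdevperf.sock"

    set_ANA_state() {                       # sh@59-60
        # $1 = ANA state for the 4420 listener, $2 = for the 4421 listener
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    port_status() {                         # sh@64
        # Succeeds iff the io_path attribute ($2: current/connected/accessible)
        # reported via the bdevperf RPC socket for listener port $1 equals $3.
        local port=$1 attr=$2 expected=$3
        [[ $($bperf_rpc bdev_nvme_get_io_paths | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr") == "$expected" ]]
    }

    check_status() {                        # sh@68-73
        # Arg order seen in the trace: current(4420) current(4421)
        # connected(4420) connected(4421) accessible(4420) accessible(4421)
        port_status 4420 current "$1" && port_status 4421 current "$2" &&
            port_status 4420 connected "$3" && port_status 4421 connected "$4" &&
            port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
    }

Each set_ANA_state call in the trace is followed by a sleep 1 (sh@109/113/120/124/130) before check_status runs, giving the host time to consume the ANA change notification before the new path states are asserted.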
00:25:27.503 10:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:27.503 10:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:27.503 10:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.503 10:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:27.763 10:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.763 10:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:27.763 10:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:27.763 10:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.022 10:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:28.022 10:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:28.022 10:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.022 10:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:28.022 10:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:28.022 10:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:28.022 10:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.022 10:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:28.281 10:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:28.281 10:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:28.281 10:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.281 10:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:28.540 10:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:28.540 10:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:28.540 10:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.540 10:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:28.799 10:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:28.799 10:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:28.799 10:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:29.059 10:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:29.317 10:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:30.255 10:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:30.255 10:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:30.255 10:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.255 10:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:30.515 10:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.515 10:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:30.515 10:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.515 10:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:30.515 10:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:30.515 10:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:30.515 10:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.515 10:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:30.775 10:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:25:30.775 10:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:30.775 10:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.775 10:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:31.034 10:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:31.034 10:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:31.034 10:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:31.034 10:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:31.293 10:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:31.293 10:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:31.293 10:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:31.293 10:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:31.553 10:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:31.553 10:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1801463 00:25:31.553 10:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1801463 ']' 00:25:31.553 10:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1801463 00:25:31.553 10:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:25:31.553 10:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:31.553 10:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1801463 00:25:31.553 10:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:31.553 10:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:31.553 10:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1801463' 00:25:31.553 killing process with pid 1801463 00:25:31.553 10:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1801463 00:25:31.553 10:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1801463 00:25:31.553 { 00:25:31.553 "results": [ 00:25:31.553 { 00:25:31.553 "job": "Nvme0n1", 
00:25:31.553 "core_mask": "0x4", 00:25:31.553 "workload": "verify", 00:25:31.553 "status": "terminated", 00:25:31.553 "verify_range": { 00:25:31.553 "start": 0, 00:25:31.553 "length": 16384 00:25:31.553 }, 00:25:31.553 "queue_depth": 128, 00:25:31.553 "io_size": 4096, 00:25:31.553 "runtime": 29.031003, 00:25:31.553 "iops": 10378.73200591795, 00:25:31.553 "mibps": 40.54192189811699, 00:25:31.553 "io_failed": 0, 00:25:31.553 "io_timeout": 0, 00:25:31.553 "avg_latency_us": 12311.170033311615, 00:25:31.553 "min_latency_us": 222.6086956521739, 00:25:31.553 "max_latency_us": 3078254.4139130437 00:25:31.553 } 00:25:31.553 ], 00:25:31.553 "core_count": 1 00:25:31.553 } 00:25:31.838 10:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1801463 00:25:31.838 10:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:31.838 [2024-11-19 10:52:08.642330] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:25:31.838 [2024-11-19 10:52:08.642383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1801463 ] 00:25:31.838 [2024-11-19 10:52:08.718720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.838 [2024-11-19 10:52:08.759809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:31.838 Running I/O for 90 seconds... 00:25:31.838 11233.00 IOPS, 43.88 MiB/s [2024-11-19T09:52:39.287Z] 11222.50 IOPS, 43.84 MiB/s [2024-11-19T09:52:39.287Z] 11221.00 IOPS, 43.83 MiB/s [2024-11-19T09:52:39.287Z] 11223.25 IOPS, 43.84 MiB/s [2024-11-19T09:52:39.287Z] 11208.60 IOPS, 43.78 MiB/s [2024-11-19T09:52:39.287Z] 11189.50 IOPS, 43.71 MiB/s [2024-11-19T09:52:39.287Z] 11179.29 IOPS, 43.67 MiB/s [2024-11-19T09:52:39.287Z] 11167.75 IOPS, 43.62 MiB/s [2024-11-19T09:52:39.287Z] 11180.89 IOPS, 43.68 MiB/s [2024-11-19T09:52:39.287Z] 11190.90 IOPS, 43.71 MiB/s [2024-11-19T09:52:39.287Z] 11181.09 IOPS, 43.68 MiB/s [2024-11-19T09:52:39.287Z] 11175.00 IOPS, 43.65 MiB/s [2024-11-19T09:52:39.287Z] [2024-11-19 10:52:22.613285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:98144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.838 [2024-11-19 10:52:22.613326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:31.838 [2024-11-19 10:52:22.613348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.838 [2024-11-19 10:52:22.613356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.838 [2024-11-19 10:52:22.613370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.838 [2024-11-19 10:52:22.613377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:31.838 [2024-11-19 10:52:22.613390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:98616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:31.838 [2024-11-19 10:52:22.613397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:31.838 [2024-11-19 10:52:22.613409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:98624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.838 [2024-11-19 10:52:22.613416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:31.838 [2024-11-19 10:52:22.613429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.838 [2024-11-19 10:52:22.613436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:31.838 [2024-11-19 10:52:22.613449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:98640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.838 [2024-11-19 10:52:22.613456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:31.838 [2024-11-19 10:52:22.613468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:98648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.838 [2024-11-19 10:52:22.613475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:31.838 [2024-11-19 10:52:22.613487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:98656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.838 [2024-11-19 10:52:22.613494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:31.838 [2024-11-19 10:52:22.613506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.838 [2024-11-19 10:52:22.613519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:31.838 [2024-11-19 10:52:22.613531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.838 [2024-11-19 10:52:22.613538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:31.838 [2024-11-19 10:52:22.613551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.838 [2024-11-19 10:52:22.613558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:31.838 [2024-11-19 10:52:22.613570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.838 [2024-11-19 10:52:22.613577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:31.838 [2024-11-19 10:52:22.613589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 
lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.838 [2024-11-19 10:52:22.613596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:31.838 [2024-11-19 10:52:22.613609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:98704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.838 [2024-11-19 10:52:22.613616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:31.838 [2024-11-19 10:52:22.613629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:98712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.838 [2024-11-19 10:52:22.613636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:31.838 [2024-11-19 10:52:22.613648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.838 [2024-11-19 10:52:22.613655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:31.838 [2024-11-19 10:52:22.613668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:98728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.838 [2024-11-19 10:52:22.613675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:31.838 [2024-11-19 10:52:22.613687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.838 [2024-11-19 10:52:22.613694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:31.838 [2024-11-19 10:52:22.613707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:98744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.838 [2024-11-19 10:52:22.613713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:31.838 [2024-11-19 10:52:22.613726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.838 [2024-11-19 10:52:22.613733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:31.838 [2024-11-19 10:52:22.613745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:98760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.838 [2024-11-19 10:52:22.613754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:31.838 [2024-11-19 10:52:22.613767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:98768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.838 [2024-11-19 10:52:22.613774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:31.838 [2024-11-19 10:52:22.613786] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.838 [2024-11-19 10:52:22.613793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:31.838 [2024-11-19 10:52:22.613805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.838 [2024-11-19 10:52:22.613812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:31.838 [2024-11-19 10:52:22.613824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.838 [2024-11-19 10:52:22.613831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:31.838 [2024-11-19 10:52:22.613843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:98800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.838 [2024-11-19 10:52:22.613850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:31.838 [2024-11-19 10:52:22.613862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.838 [2024-11-19 10:52:22.613869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:31.839 [2024-11-19 10:52:22.613881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.839 [2024-11-19 10:52:22.613888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:31.839 [2024-11-19 10:52:22.613900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.839 [2024-11-19 10:52:22.613907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:31.839 [2024-11-19 10:52:22.613920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.839 [2024-11-19 10:52:22.613927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:31.839 [2024-11-19 10:52:22.613940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.839 [2024-11-19 10:52:22.613953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:31.839 [2024-11-19 10:52:22.614236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.839 [2024-11-19 10:52:22.614247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 
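A note on the completion spam above and below: every print_completion line reports ASYMMETRIC ACCESS INACCESSIBLE (03/02), i.e. Status Code Type 3h (Path Related Status), Status Code 02h (ANA Inaccessible). These are bdevperf I/Os that were in flight at 10:52:22, when set_ANA_state moved both listeners to the inaccessible state (sh@108). The NVMe bdev layer treats these as retryable path errors rather than hard I/O failures, which is why the verify job keeps running and the JSON above still reports "io_failed": 0.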
00:25:31.839 [2024-11-19 10:52:22.614262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.839 [2024-11-19 10:52:22.614269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.839 [2024-11-19 10:52:22.614283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:98864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.839 [2024-11-19 10:52:22.614291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:31.839 [2024-11-19 10:52:22.614303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:98872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.839 [2024-11-19 10:52:22.614310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:31.839 [2024-11-19 10:52:22.614322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.839 [2024-11-19 10:52:22.614329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:31.839 [2024-11-19 10:52:22.614342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.839 [2024-11-19 10:52:22.614349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:31.839 [2024-11-19 10:52:22.614362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.839 [2024-11-19 10:52:22.614369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:31.839 [2024-11-19 10:52:22.614381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.839 [2024-11-19 10:52:22.614388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:31.839 [2024-11-19 10:52:22.614401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.839 [2024-11-19 10:52:22.614407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:31.839 [2024-11-19 10:52:22.614420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.839 [2024-11-19 10:52:22.614426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:31.839 [2024-11-19 10:52:22.614439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:98928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.839 [2024-11-19 10:52:22.614446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:87 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:31.839 [2024-11-19 10:52:22.614458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:98936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.839 [2024-11-19 10:52:22.614465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:31.839 [2024-11-19 10:52:22.614477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.839 [2024-11-19 10:52:22.614483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:31.839 [2024-11-19 10:52:22.614496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.839 [2024-11-19 10:52:22.614502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:31.839 [2024-11-19 10:52:22.614516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:98960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.839 [2024-11-19 10:52:22.614523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:31.839 [2024-11-19 10:52:22.614535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:98968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.839 [2024-11-19 10:52:22.614542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:31.839 [2024-11-19 10:52:22.614556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.839 [2024-11-19 10:52:22.614563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:31.839 [2024-11-19 10:52:22.614576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.839 [2024-11-19 10:52:22.614583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:31.839 [2024-11-19 10:52:22.614595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.839 [2024-11-19 10:52:22.614602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:31.839 [2024-11-19 10:52:22.614614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.839 [2024-11-19 10:52:22.614621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:31.839 [2024-11-19 10:52:22.614633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.839 [2024-11-19 10:52:22.614640] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:25:31.839 [2024-11-19 10:52:22.614652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.839 [2024-11-19 10:52:22.614659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:25:31.840 [2024-11-19 10:52:22.615692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:98152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.840 [2024-11-19 10:52:22.615698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs elided (2024-11-19 10:52:22.614671 - 10:52:22.632644): READ (lba:98144-98592, SGL TRANSPORT DATA BLOCK) and WRITE (lba:98600-99160, SGL DATA BLOCK OFFSET) commands on sqid:1, len:8 each, every one completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 ...]
00:25:31.845 [2024-11-19 10:52:22.632660]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:98616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.845 [2024-11-19 10:52:22.632669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:31.845 [2024-11-19 10:52:22.632686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:98624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.845 [2024-11-19 10:52:22.632695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:31.845 [2024-11-19 10:52:22.632712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.845 [2024-11-19 10:52:22.632721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:31.845 [2024-11-19 10:52:22.632738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:98640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.845 [2024-11-19 10:52:22.632747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:31.845 [2024-11-19 10:52:22.632764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:98648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.845 [2024-11-19 10:52:22.632773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:31.845 [2024-11-19 10:52:22.632790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:98656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.845 [2024-11-19 10:52:22.632799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:31.845 [2024-11-19 10:52:22.632816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.845 [2024-11-19 10:52:22.632825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:31.845 [2024-11-19 10:52:22.632841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.845 [2024-11-19 10:52:22.632853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:31.845 [2024-11-19 10:52:22.632869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.845 [2024-11-19 10:52:22.632879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:31.845 [2024-11-19 10:52:22.632896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.845 [2024-11-19 10:52:22.632905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004c p:0 
m:0 dnr:0 00:25:31.845 [2024-11-19 10:52:22.632922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.845 [2024-11-19 10:52:22.632931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:31.845 [2024-11-19 10:52:22.632952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:98704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.845 [2024-11-19 10:52:22.632962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:31.845 [2024-11-19 10:52:22.632978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:98712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.845 [2024-11-19 10:52:22.632988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:31.845 [2024-11-19 10:52:22.633004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.845 [2024-11-19 10:52:22.633014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:31.845 [2024-11-19 10:52:22.633033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:98728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.845 [2024-11-19 10:52:22.633042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:31.845 [2024-11-19 10:52:22.633059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.845 [2024-11-19 10:52:22.633068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:31.845 [2024-11-19 10:52:22.633085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.845 [2024-11-19 10:52:22.633095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:31.845 [2024-11-19 10:52:22.633111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.845 [2024-11-19 10:52:22.633121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:31.845 [2024-11-19 10:52:22.633138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:98760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.845 [2024-11-19 10:52:22.633147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:31.845 [2024-11-19 10:52:22.633165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:98768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.845 [2024-11-19 10:52:22.633176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:31.845 [2024-11-19 10:52:22.634061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.845 [2024-11-19 10:52:22.634078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:31.845 [2024-11-19 10:52:22.634098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.845 [2024-11-19 10:52:22.634108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:31.845 [2024-11-19 10:52:22.634125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.845 [2024-11-19 10:52:22.634135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:31.845 [2024-11-19 10:52:22.634152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:98800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.845 [2024-11-19 10:52:22.634161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:31.845 [2024-11-19 10:52:22.634178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.845 [2024-11-19 10:52:22.634188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:31.845 [2024-11-19 10:52:22.634204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.845 [2024-11-19 10:52:22.634214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:31.845 [2024-11-19 10:52:22.634231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.845 [2024-11-19 10:52:22.634240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:31.845 [2024-11-19 10:52:22.634257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.845 [2024-11-19 10:52:22.634266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:31.845 [2024-11-19 10:52:22.634283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.845 [2024-11-19 10:52:22.634292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:31.845 [2024-11-19 10:52:22.634309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.845 [2024-11-19 10:52:22.634319] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:31.845 [2024-11-19 10:52:22.634336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.845 [2024-11-19 10:52:22.634346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.845 [2024-11-19 10:52:22.634363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:98864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.845 [2024-11-19 10:52:22.634372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:31.845 [2024-11-19 10:52:22.634396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:98872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.846 [2024-11-19 10:52:22.634405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:31.846 [2024-11-19 10:52:22.634422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.846 [2024-11-19 10:52:22.634432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:31.846 [2024-11-19 10:52:22.634449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.846 [2024-11-19 10:52:22.634459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:31.846 [2024-11-19 10:52:22.634476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:98896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.846 [2024-11-19 10:52:22.634485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:31.846 [2024-11-19 10:52:22.634502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.846 [2024-11-19 10:52:22.634511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:31.846 [2024-11-19 10:52:22.634528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.846 [2024-11-19 10:52:22.634537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:31.846 [2024-11-19 10:52:22.634554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:98920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.846 [2024-11-19 10:52:22.634563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:31.846 [2024-11-19 10:52:22.634580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.846 
[2024-11-19 10:52:22.634589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:31.846 [2024-11-19 10:52:22.634606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.846 [2024-11-19 10:52:22.634616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:31.846 [2024-11-19 10:52:22.634633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:98944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.846 [2024-11-19 10:52:22.634642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:31.846 [2024-11-19 10:52:22.634659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.846 [2024-11-19 10:52:22.634668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:31.846 [2024-11-19 10:52:22.634685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:98960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.846 [2024-11-19 10:52:22.634694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:31.846 [2024-11-19 10:52:22.634713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:98968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.846 [2024-11-19 10:52:22.634723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:31.846 [2024-11-19 10:52:22.634740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.846 [2024-11-19 10:52:22.634749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:31.846 [2024-11-19 10:52:22.634766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.846 [2024-11-19 10:52:22.634776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:31.846 [2024-11-19 10:52:22.634793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.846 [2024-11-19 10:52:22.634802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:31.846 [2024-11-19 10:52:22.634819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.846 [2024-11-19 10:52:22.634828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:31.846 [2024-11-19 10:52:22.634845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:99008 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.846 [2024-11-19 10:52:22.634855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:31.846 [2024-11-19 10:52:22.634872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.846 [2024-11-19 10:52:22.634881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:31.846 [2024-11-19 10:52:22.634897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.846 [2024-11-19 10:52:22.634907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:31.846 [2024-11-19 10:52:22.634924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.846 [2024-11-19 10:52:22.634934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.846 [2024-11-19 10:52:22.634956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.846 [2024-11-19 10:52:22.634966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:31.846 [2024-11-19 10:52:22.634982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.846 [2024-11-19 10:52:22.634992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:31.846 [2024-11-19 10:52:22.635008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.846 [2024-11-19 10:52:22.635017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:31.846 [2024-11-19 10:52:22.635034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.846 [2024-11-19 10:52:22.635045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:31.846 [2024-11-19 10:52:22.635062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.846 [2024-11-19 10:52:22.635071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:31.846 [2024-11-19 10:52:22.635088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.846 [2024-11-19 10:52:22.635097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:31.846 [2024-11-19 10:52:22.635114] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.846 [2024-11-19 10:52:22.635124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:31.846 [2024-11-19 10:52:22.635624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.846 [2024-11-19 10:52:22.635640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:31.846 [2024-11-19 10:52:22.635659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.846 [2024-11-19 10:52:22.635668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.846 [2024-11-19 10:52:22.635685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.846 [2024-11-19 10:52:22.635695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.846 [2024-11-19 10:52:22.635711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:99120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.846 [2024-11-19 10:52:22.635720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:31.846 [2024-11-19 10:52:22.635738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.846 [2024-11-19 10:52:22.635747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:31.846 [2024-11-19 10:52:22.635764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:98152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.846 [2024-11-19 10:52:22.635773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:31.846 [2024-11-19 10:52:22.635790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:98160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.846 [2024-11-19 10:52:22.635799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:31.846 [2024-11-19 10:52:22.635816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:98168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.846 [2024-11-19 10:52:22.635825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:31.846 [2024-11-19 10:52:22.635842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:98176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.846 [2024-11-19 10:52:22.635855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:31.846 [2024-11-19 
10:52:22.635872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.846 [2024-11-19 10:52:22.635881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:31.846 [2024-11-19 10:52:22.635901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.846 [2024-11-19 10:52:22.635911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:31.846 [2024-11-19 10:52:22.635928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:98200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.847 [2024-11-19 10:52:22.635937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:31.847 [2024-11-19 10:52:22.635960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:99136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.847 [2024-11-19 10:52:22.635970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:31.847 [2024-11-19 10:52:22.635987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.847 [2024-11-19 10:52:22.635996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:31.847 [2024-11-19 10:52:22.636012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:99152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.847 [2024-11-19 10:52:22.636022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:31.847 [2024-11-19 10:52:22.636038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:99160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.847 [2024-11-19 10:52:22.636048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:31.847 [2024-11-19 10:52:22.636065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:98208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.847 [2024-11-19 10:52:22.636074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:31.847 [2024-11-19 10:52:22.636090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:98216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.847 [2024-11-19 10:52:22.636100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:31.847 [2024-11-19 10:52:22.636117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:98224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.847 [2024-11-19 10:52:22.636126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 
cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:31.847 [2024-11-19 10:52:22.636143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:98232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.847 [2024-11-19 10:52:22.636152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:31.847 [2024-11-19 10:52:22.636169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:98240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.847 [2024-11-19 10:52:22.636178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:31.847 [2024-11-19 10:52:22.636197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:98248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.847 [2024-11-19 10:52:22.636206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:31.847 [2024-11-19 10:52:22.636223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:98256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.847 [2024-11-19 10:52:22.636232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:31.847 [2024-11-19 10:52:22.636249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:98264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.847 [2024-11-19 10:52:22.636258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:31.847 [2024-11-19 10:52:22.636275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.847 [2024-11-19 10:52:22.636284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:31.847 [2024-11-19 10:52:22.636301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:98280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.847 [2024-11-19 10:52:22.636310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:31.847 [2024-11-19 10:52:22.636328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.847 [2024-11-19 10:52:22.636337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:31.847 [2024-11-19 10:52:22.636354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:98296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.847 [2024-11-19 10:52:22.636363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:31.847 [2024-11-19 10:52:22.636381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:98304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.847 [2024-11-19 10:52:22.636390] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:31.847 [2024-11-19 10:52:22.636407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:98312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.847 [2024-11-19 10:52:22.636416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:31.847 [2024-11-19 10:52:22.636432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:98320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.847 [2024-11-19 10:52:22.636442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:31.847 [2024-11-19 10:52:22.636458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:98328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.847 [2024-11-19 10:52:22.636468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:31.847 [2024-11-19 10:52:22.636485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:98336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.847 [2024-11-19 10:52:22.636494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:31.847 [2024-11-19 10:52:22.636513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:98344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.847 [2024-11-19 10:52:22.636522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:31.847 [2024-11-19 10:52:22.636541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:98352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.847 [2024-11-19 10:52:22.636550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.847 [2024-11-19 10:52:22.636567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:98360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.847 [2024-11-19 10:52:22.636576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:31.847 [2024-11-19 10:52:22.636593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:98368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.847 [2024-11-19 10:52:22.636602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:31.847 [2024-11-19 10:52:22.636620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:98376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.847 [2024-11-19 10:52:22.636628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:31.847 [2024-11-19 10:52:22.636645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:98384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.847 [2024-11-19 
10:52:22.636655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:31.847 [2024-11-19 10:52:22.636671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:98392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.847 [2024-11-19 10:52:22.636680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:31.847 [2024-11-19 10:52:22.636697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:98400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.847 [2024-11-19 10:52:22.636706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:31.847 [2024-11-19 10:52:22.636723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:98408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.847 [2024-11-19 10:52:22.636733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:31.847 [2024-11-19 10:52:22.636750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:98416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.847 [2024-11-19 10:52:22.636759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:31.847 [2024-11-19 10:52:22.636776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:98424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.847 [2024-11-19 10:52:22.636785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:31.847 [2024-11-19 10:52:22.636802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:98432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.847 [2024-11-19 10:52:22.636811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:31.847 [2024-11-19 10:52:22.636828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:98440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-11-19 10:52:22.636839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:31.848 [2024-11-19 10:52:22.636856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:98448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-11-19 10:52:22.636865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:31.848 [2024-11-19 10:52:22.636882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:98456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-11-19 10:52:22.636891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:31.848 [2024-11-19 10:52:22.637378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:98464 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-11-19 10:52:22.637392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:31.848 [2024-11-19 10:52:22.637411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:98472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-11-19 10:52:22.637420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:31.848 [2024-11-19 10:52:22.637440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:98480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-11-19 10:52:22.637449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:31.848 [2024-11-19 10:52:22.637466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:98488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-11-19 10:52:22.637475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:31.848 [2024-11-19 10:52:22.637492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:98496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-11-19 10:52:22.637501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:31.848 [2024-11-19 10:52:22.637518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:98504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-11-19 10:52:22.637527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:31.848 [2024-11-19 10:52:22.637544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:98512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-11-19 10:52:22.637553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:31.848 [2024-11-19 10:52:22.637570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:98520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-11-19 10:52:22.637579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:31.848 [2024-11-19 10:52:22.637596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:98528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-11-19 10:52:22.637606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:31.848 [2024-11-19 10:52:22.637622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:98536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-11-19 10:52:22.637634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:31.848 [2024-11-19 10:52:22.637651] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:98544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-11-19 10:52:22.637660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:31.848 [2024-11-19 10:52:22.637677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:98552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-11-19 10:52:22.637688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:31.848 [2024-11-19 10:52:22.637705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:98560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-11-19 10:52:22.637714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:31.848 [2024-11-19 10:52:22.637731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:98568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-11-19 10:52:22.637740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:31.848 [2024-11-19 10:52:22.637757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:98576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-11-19 10:52:22.637766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:31.848 [2024-11-19 10:52:22.637783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-11-19 10:52:22.637792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:31.848 [2024-11-19 10:52:22.637809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:98592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-11-19 10:52:22.637818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:31.848 [2024-11-19 10:52:22.637835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:98144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-11-19 10:52:22.637844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:31.848 [2024-11-19 10:52:22.637863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.848 [2024-11-19 10:52:22.637872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.848 [2024-11-19 10:52:22.637888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.848 [2024-11-19 10:52:22.637898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0
00:25:31.848 [2024-11-19 10:52:22.637915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:98616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.848 [2024-11-19 10:52:22.637924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:25:31.848-00:25:31.854 [2024-11-19 10:52:22.637940-10:52:22.651157] nvme_qpair.c: (repeated *NOTICE* command/completion pairs of the same form: WRITE sqid:1 lba:98600-99160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 and READ sqid:1 lba:98144-98592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 p:0 m:0 dnr:0)
00:25:31.854 [2024-11-19 10:52:22.651157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:98224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.854 [2024-11-19 10:52:22.651166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:31.854 [2024-11-19 10:52:22.651180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:98232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.854 [2024-11-19 10:52:22.651189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:31.854 [2024-11-19 10:52:22.651204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:98240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.854 [2024-11-19 10:52:22.651212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:31.854 [2024-11-19 10:52:22.651227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:98248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.854 [2024-11-19 10:52:22.651235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:31.854 [2024-11-19 10:52:22.651253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.854 [2024-11-19 10:52:22.651262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:31.854 [2024-11-19 10:52:22.651277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:98264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.854 [2024-11-19 10:52:22.651285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:31.854 [2024-11-19 10:52:22.651300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.854 [2024-11-19 10:52:22.651309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:31.854 [2024-11-19 10:52:22.651324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:98280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.854 [2024-11-19 10:52:22.651332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:31.854 [2024-11-19 10:52:22.651347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:98288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.854 [2024-11-19 10:52:22.651355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:31.854 [2024-11-19 10:52:22.651370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:98296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.854 [2024-11-19 10:52:22.651379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:31.854 [2024-11-19 10:52:22.651394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 
lba:98304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.854 [2024-11-19 10:52:22.651402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:31.854 [2024-11-19 10:52:22.651417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:98312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.854 [2024-11-19 10:52:22.651426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:31.854 [2024-11-19 10:52:22.651441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:98320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.854 [2024-11-19 10:52:22.651450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:31.854 [2024-11-19 10:52:22.651464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:98328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.854 [2024-11-19 10:52:22.651473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:31.854 [2024-11-19 10:52:22.651488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:98336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.854 [2024-11-19 10:52:22.651496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:31.854 [2024-11-19 10:52:22.651511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:98344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.854 [2024-11-19 10:52:22.651520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:31.854 [2024-11-19 10:52:22.651537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:98352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.854 [2024-11-19 10:52:22.651546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.854 [2024-11-19 10:52:22.651561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:98360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.854 [2024-11-19 10:52:22.651569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:31.854 [2024-11-19 10:52:22.651584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:98368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.854 [2024-11-19 10:52:22.651593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:31.854 [2024-11-19 10:52:22.651608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:98376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.854 [2024-11-19 10:52:22.651616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:31.854 [2024-11-19 10:52:22.651631] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:98384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.854 [2024-11-19 10:52:22.651639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:31.854 [2024-11-19 10:52:22.651654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:98392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.854 [2024-11-19 10:52:22.651663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:31.854 [2024-11-19 10:52:22.651677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:98400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.854 [2024-11-19 10:52:22.651686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:31.854 [2024-11-19 10:52:22.651701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:98408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.854 [2024-11-19 10:52:22.651709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:31.854 [2024-11-19 10:52:22.651725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:98416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.854 [2024-11-19 10:52:22.651733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:31.854 [2024-11-19 10:52:22.651748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:98424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.854 [2024-11-19 10:52:22.651756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:31.854 [2024-11-19 10:52:22.651771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:98432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.854 [2024-11-19 10:52:22.651780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:31.854 [2024-11-19 10:52:22.651795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:98440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.854 [2024-11-19 10:52:22.651803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:31.854 [2024-11-19 10:52:22.651818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:98448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.854 [2024-11-19 10:52:22.651829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:31.854 [2024-11-19 10:52:22.651844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:98456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.854 [2024-11-19 10:52:22.651852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002e p:0 m:0 
dnr:0 00:25:31.854 [2024-11-19 10:52:22.651867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:98464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.854 [2024-11-19 10:52:22.651876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:31.854 [2024-11-19 10:52:22.651891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:98472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.854 [2024-11-19 10:52:22.651899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:31.854 [2024-11-19 10:52:22.651914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:98480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.854 [2024-11-19 10:52:22.651923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:31.854 [2024-11-19 10:52:22.651938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.854 [2024-11-19 10:52:22.651946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:31.854 [2024-11-19 10:52:22.651967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:98496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.854 [2024-11-19 10:52:22.651975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:31.854 [2024-11-19 10:52:22.651990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:98504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.854 [2024-11-19 10:52:22.651998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:31.854 [2024-11-19 10:52:22.652013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:98512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.854 [2024-11-19 10:52:22.652021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:31.854 [2024-11-19 10:52:22.652037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:98520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.855 [2024-11-19 10:52:22.652045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:31.855 [2024-11-19 10:52:22.652060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:98528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.855 [2024-11-19 10:52:22.652068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:31.855 [2024-11-19 10:52:22.652083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:98536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.855 [2024-11-19 10:52:22.652091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:31.855 [2024-11-19 10:52:22.652106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:98544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.855 [2024-11-19 10:52:22.652116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:31.855 [2024-11-19 10:52:22.652131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:98552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.855 [2024-11-19 10:52:22.652139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:31.855 [2024-11-19 10:52:22.652154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:98560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.855 [2024-11-19 10:52:22.652163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:31.855 [2024-11-19 10:52:22.652178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.855 [2024-11-19 10:52:22.652186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:31.855 [2024-11-19 10:52:22.652201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:98576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.855 [2024-11-19 10:52:22.652209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:31.855 [2024-11-19 10:52:22.652224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:98584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.855 [2024-11-19 10:52:22.652232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:31.855 [2024-11-19 10:52:22.652248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:98592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.855 [2024-11-19 10:52:22.652256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:31.855 [2024-11-19 10:52:22.652271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:98144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.855 [2024-11-19 10:52:22.652279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:31.855 [2024-11-19 10:52:22.652294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.855 [2024-11-19 10:52:22.652303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.855 [2024-11-19 10:52:22.652318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.855 [2024-11-19 10:52:22.652326] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:31.855 [2024-11-19 10:52:22.652341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:98616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.855 [2024-11-19 10:52:22.652350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:31.855 [2024-11-19 10:52:22.652365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.855 [2024-11-19 10:52:22.652373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:31.855 [2024-11-19 10:52:22.652389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.855 [2024-11-19 10:52:22.652397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:31.855 [2024-11-19 10:52:22.652414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.855 [2024-11-19 10:52:22.652422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:31.855 [2024-11-19 10:52:22.652960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:98648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.855 [2024-11-19 10:52:22.652974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:31.855 [2024-11-19 10:52:22.652990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.855 [2024-11-19 10:52:22.652999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:31.855 [2024-11-19 10:52:22.653014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.855 [2024-11-19 10:52:22.653022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:31.855 [2024-11-19 10:52:22.653037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.855 [2024-11-19 10:52:22.653045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:31.855 [2024-11-19 10:52:22.653060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.855 [2024-11-19 10:52:22.653068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:31.855 [2024-11-19 10:52:22.653083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:31.855 [2024-11-19 10:52:22.653092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:31.855 [2024-11-19 10:52:22.653106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.855 [2024-11-19 10:52:22.653114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:31.855 [2024-11-19 10:52:22.653129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:98704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.855 [2024-11-19 10:52:22.653138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:31.855 [2024-11-19 10:52:22.653152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:98712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.855 [2024-11-19 10:52:22.653161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:31.855 [2024-11-19 10:52:22.653175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.855 [2024-11-19 10:52:22.653184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:31.855 [2024-11-19 10:52:22.653198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.855 [2024-11-19 10:52:22.653207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:31.855 [2024-11-19 10:52:22.653227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.855 [2024-11-19 10:52:22.653235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:31.855 [2024-11-19 10:52:22.653250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:98744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.855 [2024-11-19 10:52:22.653258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:31.855 [2024-11-19 10:52:22.653273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.855 [2024-11-19 10:52:22.653282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:31.855 [2024-11-19 10:52:22.653297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.855 [2024-11-19 10:52:22.653305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:31.855 [2024-11-19 10:52:22.653320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 
lba:98768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.855 [2024-11-19 10:52:22.653328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:31.855 [2024-11-19 10:52:22.653343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.855 [2024-11-19 10:52:22.653351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:31.855 [2024-11-19 10:52:22.653366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.855 [2024-11-19 10:52:22.653375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:31.855 [2024-11-19 10:52:22.653389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.855 [2024-11-19 10:52:22.653398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:31.855 [2024-11-19 10:52:22.653413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:98800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.855 [2024-11-19 10:52:22.653421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:31.855 [2024-11-19 10:52:22.653435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.855 [2024-11-19 10:52:22.653444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:31.855 [2024-11-19 10:52:22.653459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.856 [2024-11-19 10:52:22.653467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:31.856 [2024-11-19 10:52:22.653482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.856 [2024-11-19 10:52:22.653490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:31.856 [2024-11-19 10:52:22.653505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.856 [2024-11-19 10:52:22.653515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:31.856 [2024-11-19 10:52:22.653530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.856 [2024-11-19 10:52:22.653538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:31.856 [2024-11-19 10:52:22.653553] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.856 [2024-11-19 10:52:22.653561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:31.856 [2024-11-19 10:52:22.653576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:98856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.856 [2024-11-19 10:52:22.653585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.856 [2024-11-19 10:52:22.653600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:98864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.856 [2024-11-19 10:52:22.653608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:31.856 [2024-11-19 10:52:22.653623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:98872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.856 [2024-11-19 10:52:22.653631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:31.856 [2024-11-19 10:52:22.653647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.856 [2024-11-19 10:52:22.653655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:31.856 [2024-11-19 10:52:22.653670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.856 [2024-11-19 10:52:22.653678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:31.856 [2024-11-19 10:52:22.653693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:98896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.856 [2024-11-19 10:52:22.653701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:31.856 [2024-11-19 10:52:22.653716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.856 [2024-11-19 10:52:22.653724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:31.856 [2024-11-19 10:52:22.653739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.856 [2024-11-19 10:52:22.653748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:31.856 [2024-11-19 10:52:22.653762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:98920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.856 [2024-11-19 10:52:22.653771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 
00:25:31.856 [2024-11-19 10:52:22.653785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:98928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.856 [2024-11-19 10:52:22.653795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:31.856 [2024-11-19 10:52:22.653810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:98936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.856 [2024-11-19 10:52:22.653818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:31.856 [2024-11-19 10:52:22.653833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.856 [2024-11-19 10:52:22.653841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:31.856 [2024-11-19 10:52:22.653856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:98952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.856 [2024-11-19 10:52:22.653864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:31.856 [2024-11-19 10:52:22.653879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:98960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.856 [2024-11-19 10:52:22.653888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:31.856 [2024-11-19 10:52:22.653902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.856 [2024-11-19 10:52:22.653910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:31.856 [2024-11-19 10:52:22.653925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.856 [2024-11-19 10:52:22.653933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:31.856 [2024-11-19 10:52:22.653953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.856 [2024-11-19 10:52:22.653961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:31.856 [2024-11-19 10:52:22.653976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.856 [2024-11-19 10:52:22.653984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:31.856 [2024-11-19 10:52:22.654000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.856 [2024-11-19 10:52:22.654008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:47 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:31.856 [2024-11-19 10:52:22.654023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.856 [2024-11-19 10:52:22.654031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:31.856 [2024-11-19 10:52:22.654046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.856 [2024-11-19 10:52:22.654055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:31.856 [2024-11-19 10:52:22.654070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.856 [2024-11-19 10:52:22.654078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:31.856 [2024-11-19 10:52:22.654095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.856 [2024-11-19 10:52:22.654103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.856 [2024-11-19 10:52:22.654118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.856 [2024-11-19 10:52:22.654127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:31.857 [2024-11-19 10:52:22.654142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:99048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.857 [2024-11-19 10:52:22.654150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:31.857 [2024-11-19 10:52:22.654165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.857 [2024-11-19 10:52:22.654173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:31.857 [2024-11-19 10:52:22.654187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.857 [2024-11-19 10:52:22.654196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:31.857 [2024-11-19 10:52:22.654211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.857 [2024-11-19 10:52:22.654219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:31.857 [2024-11-19 10:52:22.654234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.857 [2024-11-19 10:52:22.654242] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:31.857 [2024-11-19 10:52:22.654257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.857 [2024-11-19 10:52:22.654265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:31.857 [2024-11-19 10:52:22.654280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.857 [2024-11-19 10:52:22.654289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:31.857 [2024-11-19 10:52:22.654304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.857 [2024-11-19 10:52:22.654312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.857 [2024-11-19 10:52:22.654327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.857 [2024-11-19 10:52:22.654335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.857 [2024-11-19 10:52:22.654350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:99120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.857 [2024-11-19 10:52:22.654359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:31.857 [2024-11-19 10:52:22.654375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.857 [2024-11-19 10:52:22.654383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:31.857 [2024-11-19 10:52:22.654398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:98152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.857 [2024-11-19 10:52:22.654407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:31.857 [2024-11-19 10:52:22.654421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:98160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.857 [2024-11-19 10:52:22.654430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:31.857 [2024-11-19 10:52:22.654445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:98168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.857 [2024-11-19 10:52:22.654453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:31.857 [2024-11-19 10:52:22.654468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:98176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:31.857 [2024-11-19 10:52:22.654476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:31.857 [2024-11-19 10:52:22.654491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.857 [2024-11-19 10:52:22.654499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:31.857 [2024-11-19 10:52:22.654515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.857 [2024-11-19 10:52:22.654523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:31.857 [2024-11-19 10:52:22.655228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.857 [2024-11-19 10:52:22.655244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:31.857 [2024-11-19 10:52:22.655262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:99136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.857 [2024-11-19 10:52:22.655271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:31.857 [2024-11-19 10:52:22.655286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.857 [2024-11-19 10:52:22.655294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:31.857 [2024-11-19 10:52:22.655309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:99152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.857 [2024-11-19 10:52:22.655318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:31.857 [2024-11-19 10:52:22.655333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:99160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.857 [2024-11-19 10:52:22.655341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:31.857 [2024-11-19 10:52:22.655356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.857 [2024-11-19 10:52:22.655367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:31.857 [2024-11-19 10:52:22.655382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:98216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.857 [2024-11-19 10:52:22.655390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:31.857 [2024-11-19 10:52:22.655405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 
lba:98224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.857 [2024-11-19 10:52:22.655413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
[... repeated *NOTICE* pairs condensed: for each outstanding I/O on qid:1, nvme_qpair.c: 243:nvme_io_qpair_print_command prints the failed command (READ with SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, WRITE with SGL DATA BLOCK OFFSET 0x0 len:0x1000; nsid:1, lba 98144-99160, len:8) and nvme_qpair.c: 474:spdk_nvme_print_completion prints its completion status ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0, with sqhd advancing 0x0011-0x007f and wrapping to 0x0000; timestamps 2024-11-19 10:52:22.655413 through 10:52:22.661970 ...] 
00:25:31.863 [2024-11-19 10:52:22.661961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.863 [2024-11-19 10:52:22.661970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:20 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:31.863 [2024-11-19 10:52:22.661983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.863 [2024-11-19 10:52:22.661989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:31.863 [2024-11-19 10:52:22.662002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:98856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.863 [2024-11-19 10:52:22.662009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.863 [2024-11-19 10:52:22.662021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.863 [2024-11-19 10:52:22.662028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:31.863 [2024-11-19 10:52:22.662040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:98872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.863 [2024-11-19 10:52:22.662047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:31.863 [2024-11-19 10:52:22.662059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.863 [2024-11-19 10:52:22.662066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:31.863 [2024-11-19 10:52:22.662078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.863 [2024-11-19 10:52:22.662085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:31.863 [2024-11-19 10:52:22.662097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:98896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.863 [2024-11-19 10:52:22.662104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:31.863 [2024-11-19 10:52:22.662116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.863 [2024-11-19 10:52:22.662123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:31.863 [2024-11-19 10:52:22.662136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.863 [2024-11-19 10:52:22.662142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:31.863 [2024-11-19 10:52:22.662154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:98920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.863 [2024-11-19 10:52:22.662161] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:31.863 [2024-11-19 10:52:22.662174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.863 [2024-11-19 10:52:22.662180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:31.863 [2024-11-19 10:52:22.662193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:98936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.863 [2024-11-19 10:52:22.662201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:31.863 [2024-11-19 10:52:22.662213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:98944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.863 [2024-11-19 10:52:22.662220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:31.863 [2024-11-19 10:52:22.662232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.863 [2024-11-19 10:52:22.662239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:31.863 [2024-11-19 10:52:22.662251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.863 [2024-11-19 10:52:22.662258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:31.863 [2024-11-19 10:52:22.662271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.863 [2024-11-19 10:52:22.662278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:31.863 [2024-11-19 10:52:22.662290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.863 [2024-11-19 10:52:22.662297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:31.864 [2024-11-19 10:52:22.662311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.864 [2024-11-19 10:52:22.662318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:31.864 [2024-11-19 10:52:22.662331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.864 [2024-11-19 10:52:22.662338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:31.864 [2024-11-19 10:52:22.662350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:31.864 [2024-11-19 10:52:22.662357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:31.864 [2024-11-19 10:52:22.662370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.864 [2024-11-19 10:52:22.662376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:31.864 [2024-11-19 10:52:22.662389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.864 [2024-11-19 10:52:22.662396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:31.864 [2024-11-19 10:52:22.662408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.864 [2024-11-19 10:52:22.662415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:31.864 [2024-11-19 10:52:22.662427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.864 [2024-11-19 10:52:22.662434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.864 [2024-11-19 10:52:22.662448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.864 [2024-11-19 10:52:22.662455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:31.864 [2024-11-19 10:52:22.662467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:99048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.864 [2024-11-19 10:52:22.662474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:31.864 [2024-11-19 10:52:22.662486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.864 [2024-11-19 10:52:22.662493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:31.864 [2024-11-19 10:52:22.662505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.864 [2024-11-19 10:52:22.662512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:31.864 [2024-11-19 10:52:22.662525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.864 [2024-11-19 10:52:22.662531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:31.864 [2024-11-19 10:52:22.662544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 
lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.864 [2024-11-19 10:52:22.662551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:31.864 [2024-11-19 10:52:22.662563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.864 [2024-11-19 10:52:22.662570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:31.864 [2024-11-19 10:52:22.662582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.864 [2024-11-19 10:52:22.662589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:31.864 [2024-11-19 10:52:22.662601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.864 [2024-11-19 10:52:22.662608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.864 [2024-11-19 10:52:22.662621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.864 [2024-11-19 10:52:22.662628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.864 [2024-11-19 10:52:22.662641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:99120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.864 [2024-11-19 10:52:22.662647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:31.864 [2024-11-19 10:52:22.662660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.864 [2024-11-19 10:52:22.662666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:31.864 [2024-11-19 10:52:22.662681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:98152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.864 [2024-11-19 10:52:22.662687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:31.864 [2024-11-19 10:52:22.662700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:98160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.864 [2024-11-19 10:52:22.662707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:31.864 [2024-11-19 10:52:22.662719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:98168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.864 [2024-11-19 10:52:22.662727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:31.864 [2024-11-19 10:52:22.662739] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:98176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.864 [2024-11-19 10:52:22.662746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:31.864 [2024-11-19 10:52:22.663322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.864 [2024-11-19 10:52:22.663335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:31.864 [2024-11-19 10:52:22.663349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.864 [2024-11-19 10:52:22.663356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:31.864 [2024-11-19 10:52:22.663369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:98200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.864 [2024-11-19 10:52:22.663376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:31.864 [2024-11-19 10:52:22.663389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:99136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.864 [2024-11-19 10:52:22.663396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:31.864 [2024-11-19 10:52:22.663408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.864 [2024-11-19 10:52:22.663415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:31.864 [2024-11-19 10:52:22.663427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:99152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.864 [2024-11-19 10:52:22.663434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:31.864 [2024-11-19 10:52:22.663446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:99160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.864 [2024-11-19 10:52:22.663453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:31.864 [2024-11-19 10:52:22.663466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:98208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.864 [2024-11-19 10:52:22.663473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:31.864 [2024-11-19 10:52:22.663485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:98216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.864 [2024-11-19 10:52:22.663494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:25:31.864 [2024-11-19 10:52:22.663507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:98224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.864 [2024-11-19 10:52:22.663514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:31.864 [2024-11-19 10:52:22.663526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.864 [2024-11-19 10:52:22.663533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:31.864 [2024-11-19 10:52:22.663545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:98240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.864 [2024-11-19 10:52:22.663552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:31.864 [2024-11-19 10:52:22.663564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:98248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.864 [2024-11-19 10:52:22.663571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:31.864 [2024-11-19 10:52:22.663584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:98256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.864 [2024-11-19 10:52:22.663591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:31.864 [2024-11-19 10:52:22.663604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:98264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.864 [2024-11-19 10:52:22.663611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:31.864 [2024-11-19 10:52:22.663623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:98272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.864 [2024-11-19 10:52:22.663630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:31.865 [2024-11-19 10:52:22.663642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:98280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.865 [2024-11-19 10:52:22.663649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:31.865 [2024-11-19 10:52:22.663661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:98288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.865 [2024-11-19 10:52:22.663668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:31.865 [2024-11-19 10:52:22.663680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:98296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.865 [2024-11-19 10:52:22.663687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:31.865 [2024-11-19 10:52:22.663699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:98304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.865 [2024-11-19 10:52:22.663706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:31.865 [2024-11-19 10:52:22.663718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.865 [2024-11-19 10:52:22.663726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:31.865 [2024-11-19 10:52:22.663738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:98320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.865 [2024-11-19 10:52:22.663745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:31.865 [2024-11-19 10:52:22.663757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:98328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.865 [2024-11-19 10:52:22.663764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:31.865 [2024-11-19 10:52:22.663776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:98336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.865 [2024-11-19 10:52:22.663782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:31.865 [2024-11-19 10:52:22.663795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:98344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.865 [2024-11-19 10:52:22.663801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:31.865 [2024-11-19 10:52:22.663814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:98352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.865 [2024-11-19 10:52:22.663821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.865 [2024-11-19 10:52:22.663833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:98360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.865 [2024-11-19 10:52:22.663840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:31.865 [2024-11-19 10:52:22.663852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:98368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.865 [2024-11-19 10:52:22.663859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:31.865 [2024-11-19 10:52:22.663871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:98376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.865 [2024-11-19 10:52:22.663877] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:31.865 [2024-11-19 10:52:22.663890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:98384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.865 [2024-11-19 10:52:22.663897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:31.865 [2024-11-19 10:52:22.663909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:98392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.865 [2024-11-19 10:52:22.663916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:31.865 [2024-11-19 10:52:22.663928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:98400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.865 [2024-11-19 10:52:22.663935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:31.865 [2024-11-19 10:52:22.663952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:98408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.865 [2024-11-19 10:52:22.663960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:31.865 [2024-11-19 10:52:22.663973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:98416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.865 [2024-11-19 10:52:22.663980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:31.865 [2024-11-19 10:52:22.663993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:98424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.865 [2024-11-19 10:52:22.663999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:31.865 [2024-11-19 10:52:22.664011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:98432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.865 [2024-11-19 10:52:22.664018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:31.865 [2024-11-19 10:52:22.664031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:98440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.865 [2024-11-19 10:52:22.664038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:31.865 [2024-11-19 10:52:22.664050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:98448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.865 [2024-11-19 10:52:22.664056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:31.865 [2024-11-19 10:52:22.664069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:98456 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:31.865 [2024-11-19 10:52:22.664075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:31.865 [2024-11-19 10:52:22.664088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:98464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.865 [2024-11-19 10:52:22.664094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:31.865 [2024-11-19 10:52:22.664107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:98472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.865 [2024-11-19 10:52:22.664114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:31.865 [2024-11-19 10:52:22.664126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:98480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.865 [2024-11-19 10:52:22.664133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:31.865 [2024-11-19 10:52:22.664145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:98488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.865 [2024-11-19 10:52:22.664152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:31.865 [2024-11-19 10:52:22.664164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:98496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.865 [2024-11-19 10:52:22.664171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:31.865 [2024-11-19 10:52:22.664183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:98504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.865 [2024-11-19 10:52:22.664190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:31.865 [2024-11-19 10:52:22.664204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:98512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.865 [2024-11-19 10:52:22.664210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:31.865 [2024-11-19 10:52:22.664223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:98520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.865 [2024-11-19 10:52:22.664230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:31.865 [2024-11-19 10:52:22.664242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.865 [2024-11-19 10:52:22.664249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:31.865 [2024-11-19 10:52:22.664261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:74 nsid:1 lba:98536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.865 [2024-11-19 10:52:22.664268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:31.865 [2024-11-19 10:52:22.664280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.865 [2024-11-19 10:52:22.664287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:31.866 [2024-11-19 10:52:22.664299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:98552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.866 [2024-11-19 10:52:22.664306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:31.866 [2024-11-19 10:52:22.664318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:98560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.866 [2024-11-19 10:52:22.664325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:31.866 [2024-11-19 10:52:22.664337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:98568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.866 [2024-11-19 10:52:22.664344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:31.866 [2024-11-19 10:52:22.664356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:98576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.866 [2024-11-19 10:52:22.664363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:31.866 [2024-11-19 10:52:22.664375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:98584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.866 [2024-11-19 10:52:22.664382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:31.866 [2024-11-19 10:52:22.664394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:98592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.866 [2024-11-19 10:52:22.664401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:31.866 [2024-11-19 10:52:22.664413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:98144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.866 [2024-11-19 10:52:22.664420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:31.866 [2024-11-19 10:52:22.664437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.866 [2024-11-19 10:52:22.668047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.866 [2024-11-19 10:52:22.668064] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.866 [2024-11-19 10:52:22.668073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:31.866 [2024-11-19 10:52:22.668085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:98616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.866 [2024-11-19 10:52:22.668092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:31.866 [2024-11-19 10:52:22.668544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:98624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.866 [2024-11-19 10:52:22.668555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:31.866 [2024-11-19 10:52:22.668569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.866 [2024-11-19 10:52:22.668577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:31.866 [2024-11-19 10:52:22.668590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:98640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.866 [2024-11-19 10:52:22.668597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:31.866 [2024-11-19 10:52:22.668609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:98648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.866 [2024-11-19 10:52:22.668616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:31.866 [2024-11-19 10:52:22.668628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.866 [2024-11-19 10:52:22.668635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:31.866 [2024-11-19 10:52:22.668648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.866 [2024-11-19 10:52:22.668655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:31.866 [2024-11-19 10:52:22.668667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.866 [2024-11-19 10:52:22.668674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:31.866 [2024-11-19 10:52:22.668686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.866 [2024-11-19 10:52:22.668693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004b 
p:0 m:0 dnr:0 00:25:31.866 [2024-11-19 10:52:22.668705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.866 [2024-11-19 10:52:22.668712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:31.866 [2024-11-19 10:52:22.668724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.866 [2024-11-19 10:52:22.668734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:31.866 [2024-11-19 10:52:22.668747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:98704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.866 [2024-11-19 10:52:22.668754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:31.866 [2024-11-19 10:52:22.668766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:98712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.866 [2024-11-19 10:52:22.668773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:31.866 [2024-11-19 10:52:22.668785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.866 [2024-11-19 10:52:22.668792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:31.866 [2024-11-19 10:52:22.668805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:98728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.866 [2024-11-19 10:52:22.668812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:31.866 [2024-11-19 10:52:22.668824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.866 [2024-11-19 10:52:22.668831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:31.866 [2024-11-19 10:52:22.668843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:98744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.866 [2024-11-19 10:52:22.668850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:31.866 [2024-11-19 10:52:22.668863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.866 [2024-11-19 10:52:22.668869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:31.866 [2024-11-19 10:52:22.668882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:98760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.866 [2024-11-19 10:52:22.668889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:31.866 [2024-11-19 10:52:22.668901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:98768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.866 [2024-11-19 10:52:22.668909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:31.866 [2024-11-19 10:52:22.668921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.866 [2024-11-19 10:52:22.668928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:31.866 [2024-11-19 10:52:22.668940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.866 [2024-11-19 10:52:22.668953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:31.866 [2024-11-19 10:52:22.668966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.866 [2024-11-19 10:52:22.668975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:31.866 [2024-11-19 10:52:22.668987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.866 [2024-11-19 10:52:22.668994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:31.866 [2024-11-19 10:52:22.669006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.866 [2024-11-19 10:52:22.669013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:31.866 [2024-11-19 10:52:22.669025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.866 [2024-11-19 10:52:22.669032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:31.866 [2024-11-19 10:52:22.669045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.866 [2024-11-19 10:52:22.669051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:31.866 [2024-11-19 10:52:22.669064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.866 [2024-11-19 10:52:22.669071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:31.866 [2024-11-19 10:52:22.669083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.866 [2024-11-19 10:52:22.669090] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005f p:0 m:0 dnr:0
[repetitive I/O trace condensed: several hundred further nvme_qpair.c NOTICE records between 10:52:22.669 and 10:52:22.674 (console time 00:25:31.867-00:25:31.872) — 243:nvme_io_qpair_print_command entries (WRITE sqid:1 nsid:1 lba:98600-99160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000; READ sqid:1 nsid:1 lba:98144-98592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each paired with a 474:spdk_nvme_print_completion entry reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 p:0 m:0 dnr:0]
00:25:31.872 [2024-11-19 10:52:22.674586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1
lba:98376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.872 [2024-11-19 10:52:22.674593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:31.872 [2024-11-19 10:52:22.674606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:98384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.872 [2024-11-19 10:52:22.674613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:31.872 [2024-11-19 10:52:22.674625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:98392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.872 [2024-11-19 10:52:22.674632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:31.872 [2024-11-19 10:52:22.674644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:98400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.872 [2024-11-19 10:52:22.674651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:31.872 [2024-11-19 10:52:22.674663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:98408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.872 [2024-11-19 10:52:22.674670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:31.872 [2024-11-19 10:52:22.674682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:98416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.872 [2024-11-19 10:52:22.674691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:31.872 [2024-11-19 10:52:22.674703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:98424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.872 [2024-11-19 10:52:22.674710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:31.872 [2024-11-19 10:52:22.674722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:98432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.872 [2024-11-19 10:52:22.674729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:31.872 [2024-11-19 10:52:22.674741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:98440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.872 [2024-11-19 10:52:22.674748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:31.872 [2024-11-19 10:52:22.674760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:98448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.872 [2024-11-19 10:52:22.674767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:31.872 [2024-11-19 10:52:22.674779] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:98456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.872 [2024-11-19 10:52:22.674786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:31.872 [2024-11-19 10:52:22.674799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:98464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.872 [2024-11-19 10:52:22.674806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:31.872 [2024-11-19 10:52:22.674818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:98472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.872 [2024-11-19 10:52:22.674825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:31.872 [2024-11-19 10:52:22.674837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:98480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.872 [2024-11-19 10:52:22.674844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:31.872 [2024-11-19 10:52:22.674856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:98488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.872 [2024-11-19 10:52:22.674863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:31.872 [2024-11-19 10:52:22.674875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:98496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.872 [2024-11-19 10:52:22.674882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:31.872 [2024-11-19 10:52:22.674894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:98504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.872 [2024-11-19 10:52:22.674901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:31.872 [2024-11-19 10:52:22.674913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.872 [2024-11-19 10:52:22.674920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:31.872 [2024-11-19 10:52:22.674933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:98520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.872 [2024-11-19 10:52:22.674940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:31.872 [2024-11-19 10:52:22.674957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.872 [2024-11-19 10:52:22.674964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 
00:25:31.872 [2024-11-19 10:52:22.674977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:98536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.872 [2024-11-19 10:52:22.674984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:31.872 [2024-11-19 10:52:22.674997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:98544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.872 [2024-11-19 10:52:22.675004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:31.872 [2024-11-19 10:52:22.675016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:98552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.872 [2024-11-19 10:52:22.675023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:31.872 [2024-11-19 10:52:22.675035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:98560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.872 [2024-11-19 10:52:22.675042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:31.872 [2024-11-19 10:52:22.675054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:98568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.872 [2024-11-19 10:52:22.675061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:31.872 [2024-11-19 10:52:22.675073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:98576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.872 [2024-11-19 10:52:22.675080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:31.872 [2024-11-19 10:52:22.675093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:98584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.872 [2024-11-19 10:52:22.675100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:31.872 [2024-11-19 10:52:22.675112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:98592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.872 [2024-11-19 10:52:22.675118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:31.872 [2024-11-19 10:52:22.675132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:98144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.872 [2024-11-19 10:52:22.675138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:31.872 [2024-11-19 10:52:22.675151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.872 [2024-11-19 10:52:22.675158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.872 [2024-11-19 10:52:22.675707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.872 [2024-11-19 10:52:22.675718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:31.872 [2024-11-19 10:52:22.675731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:98616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.873 [2024-11-19 10:52:22.675738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:31.873 [2024-11-19 10:52:22.675751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:98624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.873 [2024-11-19 10:52:22.675758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:31.873 [2024-11-19 10:52:22.675770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.873 [2024-11-19 10:52:22.675777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:31.873 [2024-11-19 10:52:22.675789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.873 [2024-11-19 10:52:22.675796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:31.873 [2024-11-19 10:52:22.675809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:98648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.873 [2024-11-19 10:52:22.675815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:31.873 [2024-11-19 10:52:22.675828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:98656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.873 [2024-11-19 10:52:22.675835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:31.873 [2024-11-19 10:52:22.675847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.873 [2024-11-19 10:52:22.675854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:31.873 [2024-11-19 10:52:22.675866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.873 [2024-11-19 10:52:22.675873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:31.873 [2024-11-19 10:52:22.675885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.873 [2024-11-19 10:52:22.675892] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:31.873 [2024-11-19 10:52:22.675904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.873 [2024-11-19 10:52:22.675912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:31.873 [2024-11-19 10:52:22.675924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.873 [2024-11-19 10:52:22.675931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:31.873 [2024-11-19 10:52:22.675943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:98704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.873 [2024-11-19 10:52:22.675960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:31.873 [2024-11-19 10:52:22.675973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:98712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.873 [2024-11-19 10:52:22.675980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:31.873 [2024-11-19 10:52:22.675992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.873 [2024-11-19 10:52:22.675999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:31.873 [2024-11-19 10:52:22.676012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:98728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.873 [2024-11-19 10:52:22.676019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:31.873 [2024-11-19 10:52:22.676031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.873 [2024-11-19 10:52:22.676038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:31.873 [2024-11-19 10:52:22.676050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:98744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.873 [2024-11-19 10:52:22.676057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:31.873 [2024-11-19 10:52:22.676069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.873 [2024-11-19 10:52:22.676076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:31.873 [2024-11-19 10:52:22.676088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:31.873 [2024-11-19 10:52:22.676095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:31.873 [2024-11-19 10:52:22.676107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:98768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.873 [2024-11-19 10:52:22.676114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:31.873 [2024-11-19 10:52:22.676126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.873 [2024-11-19 10:52:22.676133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:31.873 [2024-11-19 10:52:22.676146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.873 [2024-11-19 10:52:22.676152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:31.873 [2024-11-19 10:52:22.676165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.873 [2024-11-19 10:52:22.676172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:31.873 [2024-11-19 10:52:22.676184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:98800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.873 [2024-11-19 10:52:22.676193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:31.873 [2024-11-19 10:52:22.676205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.873 [2024-11-19 10:52:22.676212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:31.873 [2024-11-19 10:52:22.676224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.873 [2024-11-19 10:52:22.676231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:31.873 [2024-11-19 10:52:22.676243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.873 [2024-11-19 10:52:22.676250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:31.873 [2024-11-19 10:52:22.676263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.873 [2024-11-19 10:52:22.676270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:31.873 [2024-11-19 10:52:22.676282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 
lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.873 [2024-11-19 10:52:22.676288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:31.873 [2024-11-19 10:52:22.676301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.873 [2024-11-19 10:52:22.676308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:31.873 [2024-11-19 10:52:22.676320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:98856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.873 [2024-11-19 10:52:22.676327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.873 [2024-11-19 10:52:22.676339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.873 [2024-11-19 10:52:22.676346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:31.873 [2024-11-19 10:52:22.676359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:98872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.873 [2024-11-19 10:52:22.676366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:31.873 [2024-11-19 10:52:22.676688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.873 [2024-11-19 10:52:22.676698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:31.873 [2024-11-19 10:52:22.676712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.873 [2024-11-19 10:52:22.676719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:31.873 [2024-11-19 10:52:22.676732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:98896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.873 [2024-11-19 10:52:22.676739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:31.873 [2024-11-19 10:52:22.676753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.873 [2024-11-19 10:52:22.676760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:31.873 [2024-11-19 10:52:22.676772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.873 [2024-11-19 10:52:22.676779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:31.873 [2024-11-19 10:52:22.676791] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:98920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.873 [2024-11-19 10:52:22.676798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:31.874 [2024-11-19 10:52:22.676811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:98928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.874 [2024-11-19 10:52:22.676817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:31.874 [2024-11-19 10:52:22.676830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:98936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.874 [2024-11-19 10:52:22.676837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:31.874 [2024-11-19 10:52:22.676849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:98944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.874 [2024-11-19 10:52:22.676856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:31.874 [2024-11-19 10:52:22.676868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:98952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.874 [2024-11-19 10:52:22.676875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:31.874 [2024-11-19 10:52:22.676887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:98960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.874 [2024-11-19 10:52:22.676894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:31.874 [2024-11-19 10:52:22.676906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.874 [2024-11-19 10:52:22.676913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:31.874 [2024-11-19 10:52:22.676925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.874 [2024-11-19 10:52:22.676932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:31.874 [2024-11-19 10:52:22.676944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.874 [2024-11-19 10:52:22.676958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:31.874 [2024-11-19 10:52:22.676970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.874 [2024-11-19 10:52:22.676976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 
00:25:31.874 [2024-11-19 10:52:22.676991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.874 [2024-11-19 10:52:22.676998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:31.874 [2024-11-19 10:52:22.677010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.874 [2024-11-19 10:52:22.677017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:31.874 [2024-11-19 10:52:22.677029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.874 [2024-11-19 10:52:22.677036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:31.874 [2024-11-19 10:52:22.677048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.874 [2024-11-19 10:52:22.677055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:31.874 [2024-11-19 10:52:22.677067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.874 [2024-11-19 10:52:22.677074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.874 [2024-11-19 10:52:22.677087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.874 [2024-11-19 10:52:22.677094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:31.874 [2024-11-19 10:52:22.677105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:99048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.874 [2024-11-19 10:52:22.677112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:31.874 [2024-11-19 10:52:22.677125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.874 [2024-11-19 10:52:22.677131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:31.874 [2024-11-19 10:52:22.677143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.874 [2024-11-19 10:52:22.677150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:31.874 [2024-11-19 10:52:22.677163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.874 [2024-11-19 10:52:22.677169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:112 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:31.874 [2024-11-19 10:52:22.677182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.874 [2024-11-19 10:52:22.677188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:31.874 [2024-11-19 10:52:22.677201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.874 [2024-11-19 10:52:22.677207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:31.874 [2024-11-19 10:52:22.677220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.874 [2024-11-19 10:52:22.677228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:31.874 [2024-11-19 10:52:22.677240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.874 [2024-11-19 10:52:22.677247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.874 [2024-11-19 10:52:22.677260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.874 [2024-11-19 10:52:22.677266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.874 [2024-11-19 10:52:22.677546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:99120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.874 [2024-11-19 10:52:22.677557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:31.874 [2024-11-19 10:52:22.677571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.874 [2024-11-19 10:52:22.677578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:31.874 [2024-11-19 10:52:22.677590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:98152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.874 [2024-11-19 10:52:22.677597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:31.874 [2024-11-19 10:52:22.677610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:98160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.874 [2024-11-19 10:52:22.677617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:31.874 [2024-11-19 10:52:22.677629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:98168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.874 [2024-11-19 10:52:22.677636] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:31.874 [2024-11-19 10:52:22.677648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:98176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.874 [2024-11-19 10:52:22.677655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:31.874 [2024-11-19 10:52:22.677667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.874 [2024-11-19 10:52:22.677674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:31.874 [2024-11-19 10:52:22.677686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.874 [2024-11-19 10:52:22.677693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:31.874 [2024-11-19 10:52:22.677705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:98200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.874 [2024-11-19 10:52:22.677712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:31.874 [2024-11-19 10:52:22.677724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:99136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.874 [2024-11-19 10:52:22.677733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:31.874 [2024-11-19 10:52:22.677746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.875 [2024-11-19 10:52:22.677753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:31.875 [2024-11-19 10:52:22.677765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:99152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.875 [2024-11-19 10:52:22.677772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:31.875 [2024-11-19 10:52:22.677784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:99160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.875 [2024-11-19 10:52:22.677790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:31.875 [2024-11-19 10:52:22.677803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:98208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.875 [2024-11-19 10:52:22.677809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:31.875 [2024-11-19 10:52:22.677822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:98216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:31.875 [2024-11-19 10:52:22.677828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:31.875 [2024-11-19 10:52:22.677841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:98224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.875 [2024-11-19 10:52:22.677848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:31.875 [2024-11-19 10:52:22.677860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:98232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.875 [2024-11-19 10:52:22.677867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:31.875 [2024-11-19 10:52:22.677879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:98240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.875 [2024-11-19 10:52:22.677886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:31.875 [2024-11-19 10:52:22.677898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:98248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.875 [2024-11-19 10:52:22.677905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:31.875 [2024-11-19 10:52:22.677918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:98256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.875 [2024-11-19 10:52:22.677924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:31.875 [2024-11-19 10:52:22.677936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:98264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.875 [2024-11-19 10:52:22.677943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:31.875 [2024-11-19 10:52:22.677962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:98272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.875 [2024-11-19 10:52:22.677969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:31.875 [2024-11-19 10:52:22.677983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:98280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.875 [2024-11-19 10:52:22.677990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:31.875 [2024-11-19 10:52:22.678002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:98288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.875 [2024-11-19 10:52:22.678009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:31.875 [2024-11-19 10:52:22.678022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 
nsid:1 lba:98296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.875 [2024-11-19 10:52:22.678028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:31.875 [2024-11-19 10:52:22.678041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:98304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.875 [2024-11-19 10:52:22.678047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:31.875 [2024-11-19 10:52:22.678060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:98312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.875 [2024-11-19 10:52:22.678067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:31.875 [2024-11-19 10:52:22.678079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:98320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.875 [2024-11-19 10:52:22.678086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:31.875 [2024-11-19 10:52:22.678098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:98328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.875 [2024-11-19 10:52:22.678105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:31.875 [2024-11-19 10:52:22.678118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:98336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.875 [2024-11-19 10:52:22.678124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:31.875 [2024-11-19 10:52:22.678136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:98344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.875 [2024-11-19 10:52:22.678144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:31.875 [2024-11-19 10:52:22.678158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:98352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.875 [2024-11-19 10:52:22.678165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.875 [2024-11-19 10:52:22.678177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:98360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.875 [2024-11-19 10:52:22.678184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:31.875 [2024-11-19 10:52:22.678197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:98368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.875 [2024-11-19 10:52:22.678203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:31.875 [2024-11-19 10:52:22.678217] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:98376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.875 [2024-11-19 10:52:22.678224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:31.875 [2024-11-19 10:52:22.678236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:98384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.875 [2024-11-19 10:52:22.678243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:31.875 [2024-11-19 10:52:22.678255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:98392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.875 [2024-11-19 10:52:22.678262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:31.875 [2024-11-19 10:52:22.678275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:98400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.875 [2024-11-19 10:52:22.678281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:31.875 [2024-11-19 10:52:22.678293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:98408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.875 [2024-11-19 10:52:22.678300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:31.875 [2024-11-19 10:52:22.678313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.875 [2024-11-19 10:52:22.678319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:31.875 [2024-11-19 10:52:22.678332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:98424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.875 [2024-11-19 10:52:22.678339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:31.875 [2024-11-19 10:52:22.678351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:98432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.875 [2024-11-19 10:52:22.678358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:31.875 [2024-11-19 10:52:22.678371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:98440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.875 [2024-11-19 10:52:22.678378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:31.875 [2024-11-19 10:52:22.678390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:98448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.875 [2024-11-19 10:52:22.678397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 
00:25:31.875 [2024-11-19 10:52:22.678409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:98456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.875 [2024-11-19 10:52:22.678416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:31.875 [2024-11-19 10:52:22.678429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:98464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.875 [2024-11-19 10:52:22.678435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:31.875 [2024-11-19 10:52:22.678448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:98472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.875 [2024-11-19 10:52:22.678458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:31.875 [2024-11-19 10:52:22.678471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:98480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.875 [2024-11-19 10:52:22.678478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:31.875 [2024-11-19 10:52:22.678490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:98488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.875 [2024-11-19 10:52:22.678497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:31.876 [2024-11-19 10:52:22.678510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.876 [2024-11-19 10:52:22.678516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:31.876 [2024-11-19 10:52:22.678530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:98504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.876 [2024-11-19 10:52:22.678536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:31.876 [2024-11-19 10:52:22.678549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:98512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.876 [2024-11-19 10:52:22.678556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:31.876 [2024-11-19 10:52:22.678569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:98520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.876 [2024-11-19 10:52:22.678576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:31.876 [2024-11-19 10:52:22.678588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:98528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.876 [2024-11-19 10:52:22.678595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:31.876 [2024-11-19 10:52:22.678607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.876 [2024-11-19 10:52:22.678614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:31.876 [2024-11-19 10:52:22.678627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:98544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.876 [2024-11-19 10:52:22.678634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:31.876 [2024-11-19 10:52:22.678646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:98552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.876 [2024-11-19 10:52:22.678653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:31.876 [2024-11-19 10:52:22.678665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:98560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.876 [2024-11-19 10:52:22.678672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:31.876 [2024-11-19 10:52:22.678684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:98568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.876 [2024-11-19 10:52:22.678693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:31.876 [2024-11-19 10:52:22.678705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.876 [2024-11-19 10:52:22.678712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:31.876 [2024-11-19 10:52:22.678724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:98584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.876 [2024-11-19 10:52:22.678731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:31.876 [2024-11-19 10:52:22.678743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:98592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.876 [2024-11-19 10:52:22.678750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:31.876 [2024-11-19 10:52:22.678763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:98144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.876 [2024-11-19 10:52:22.678770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:31.876 [2024-11-19 10:52:22.679214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.876 [2024-11-19 10:52:22.679226] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.876 [2024-11-19 10:52:22.679254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.876 [2024-11-19 10:52:22.679262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:31.876 [2024-11-19 10:52:22.679278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:98616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.876 [2024-11-19 10:52:22.679285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:31.876 [2024-11-19 10:52:22.679301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:98624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.876 [2024-11-19 10:52:22.679308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:31.876 [2024-11-19 10:52:22.679323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.876 [2024-11-19 10:52:22.679330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:31.876 [2024-11-19 10:52:22.679345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:98640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.876 [2024-11-19 10:52:22.679352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:31.876 [2024-11-19 10:52:22.679368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:98648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.876 [2024-11-19 10:52:22.679375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:31.876 [2024-11-19 10:52:22.679390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:98656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.876 [2024-11-19 10:52:22.679397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:31.876 [2024-11-19 10:52:22.679415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.876 [2024-11-19 10:52:22.679422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:31.876 [2024-11-19 10:52:22.679438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.876 [2024-11-19 10:52:22.679445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:31.876 [2024-11-19 10:52:22.679460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:31.876 [2024-11-19 10:52:22.679467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:31.876 [2024-11-19 10:52:22.679483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.876 [2024-11-19 10:52:22.679490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:31.876 [2024-11-19 10:52:22.679505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.876 [2024-11-19 10:52:22.679512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:31.876 [2024-11-19 10:52:22.679528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:98704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.876 [2024-11-19 10:52:22.679535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:31.876 [2024-11-19 10:52:22.679550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.876 [2024-11-19 10:52:22.679557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:31.876 [2024-11-19 10:52:22.679573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.876 [2024-11-19 10:52:22.679580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:31.876 [2024-11-19 10:52:22.679595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:98728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.876 [2024-11-19 10:52:22.679602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:31.876 [2024-11-19 10:52:22.679618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.876 [2024-11-19 10:52:22.679625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:31.876 [2024-11-19 10:52:22.679640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:98744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.876 [2024-11-19 10:52:22.679647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:31.876 [2024-11-19 10:52:22.679663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.876 [2024-11-19 10:52:22.679670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:31.876 [2024-11-19 10:52:22.679687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 
lba:98760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.876 [2024-11-19 10:52:22.679694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:31.876 [2024-11-19 10:52:22.679709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:98768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.876 [2024-11-19 10:52:22.679716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:31.876 [2024-11-19 10:52:22.679732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.876 [2024-11-19 10:52:22.679739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:31.876 [2024-11-19 10:52:22.679754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.876 [2024-11-19 10:52:22.679762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:31.876 [2024-11-19 10:52:22.679777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.877 [2024-11-19 10:52:22.679785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:31.877 [2024-11-19 10:52:22.679800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:98800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.877 [2024-11-19 10:52:22.679807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:31.877 [2024-11-19 10:52:22.679822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.877 [2024-11-19 10:52:22.679829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:31.877 [2024-11-19 10:52:22.679845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.877 [2024-11-19 10:52:22.679852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:31.877 [2024-11-19 10:52:22.679868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.877 [2024-11-19 10:52:22.679875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:31.877 [2024-11-19 10:52:22.679890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.877 [2024-11-19 10:52:22.679897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:31.877 [2024-11-19 10:52:22.679913] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.877 [2024-11-19 10:52:22.679920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:31.877 [2024-11-19 10:52:22.679936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.877 [2024-11-19 10:52:22.679943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:31.877 [2024-11-19 10:52:22.679964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:98856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.877 [2024-11-19 10:52:22.679973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.877 [2024-11-19 10:52:22.679989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:98864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.877 [2024-11-19 10:52:22.679996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:31.877 [2024-11-19 10:52:22.680076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:98872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.877 [2024-11-19 10:52:22.680085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:31.877 [2024-11-19 10:52:22.680103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.877 [2024-11-19 10:52:22.680110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:31.877 [2024-11-19 10:52:22.680127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.877 [2024-11-19 10:52:22.680134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:31.877 [2024-11-19 10:52:22.680151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.877 [2024-11-19 10:52:22.680158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:31.877 [2024-11-19 10:52:22.680175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.877 [2024-11-19 10:52:22.680182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:31.877 [2024-11-19 10:52:22.680199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.877 [2024-11-19 10:52:22.680206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 
00:25:31.877 [2024-11-19 10:52:22.680224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.877 [2024-11-19 10:52:22.680231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:31.877 [2024-11-19 10:52:22.680248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.877 [2024-11-19 10:52:22.680255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:31.877 [2024-11-19 10:52:22.680272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.877 [2024-11-19 10:52:22.680279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:31.877 [2024-11-19 10:52:22.680296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:98944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.877 [2024-11-19 10:52:22.680303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:31.877 [2024-11-19 10:52:22.680321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.877 [2024-11-19 10:52:22.680330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:31.877 [2024-11-19 10:52:22.680347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:98960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.877 [2024-11-19 10:52:22.680354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:31.877 [2024-11-19 10:52:22.680371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:98968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.877 [2024-11-19 10:52:22.680378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:31.877 [2024-11-19 10:52:22.680395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.877 [2024-11-19 10:52:22.680402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:31.877 [2024-11-19 10:52:22.680421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.877 [2024-11-19 10:52:22.680428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:31.877 [2024-11-19 10:52:22.680445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.877 [2024-11-19 10:52:22.680452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:73 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:31.877 [2024-11-19 10:52:22.680470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.877 [2024-11-19 10:52:22.680477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:31.877 [2024-11-19 10:52:22.680494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.877 [2024-11-19 10:52:22.680501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:31.877 [2024-11-19 10:52:22.680518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.877 [2024-11-19 10:52:22.680525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:31.877 [2024-11-19 10:52:22.680542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.877 [2024-11-19 10:52:22.680549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:31.877 [2024-11-19 10:52:22.680566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.877 [2024-11-19 10:52:22.680573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.877 [2024-11-19 10:52:22.680590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.877 [2024-11-19 10:52:22.680597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:31.877 [2024-11-19 10:52:22.680615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.877 [2024-11-19 10:52:22.680621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:31.877 [2024-11-19 10:52:22.680640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.877 [2024-11-19 10:52:22.680647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:31.877 [2024-11-19 10:52:22.680664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.878 [2024-11-19 10:52:22.680671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:31.878 [2024-11-19 10:52:22.680688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.878 [2024-11-19 10:52:22.680695] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:31.878 [2024-11-19 10:52:22.680712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.878 [2024-11-19 10:52:22.680719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:31.878 [2024-11-19 10:52:22.680737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.878 [2024-11-19 10:52:22.680744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:31.878 [2024-11-19 10:52:22.680761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.878 [2024-11-19 10:52:22.680768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:31.878 [2024-11-19 10:52:22.680785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.878 [2024-11-19 10:52:22.680792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.878 [2024-11-19 10:52:22.680811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.878 [2024-11-19 10:52:22.680818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.878 11026.15 IOPS, 43.07 MiB/s [2024-11-19T09:52:39.327Z] 10238.57 IOPS, 39.99 MiB/s [2024-11-19T09:52:39.327Z] 9556.00 IOPS, 37.33 MiB/s [2024-11-19T09:52:39.327Z] 9036.38 IOPS, 35.30 MiB/s [2024-11-19T09:52:39.327Z] 9168.94 IOPS, 35.82 MiB/s [2024-11-19T09:52:39.327Z] 9277.39 IOPS, 36.24 MiB/s [2024-11-19T09:52:39.327Z] 9448.95 IOPS, 36.91 MiB/s [2024-11-19T09:52:39.327Z] 9639.05 IOPS, 37.65 MiB/s [2024-11-19T09:52:39.327Z] 9815.14 IOPS, 38.34 MiB/s [2024-11-19T09:52:39.327Z] 9881.00 IOPS, 38.60 MiB/s [2024-11-19T09:52:39.327Z] 9936.13 IOPS, 38.81 MiB/s [2024-11-19T09:52:39.327Z] 9977.79 IOPS, 38.98 MiB/s [2024-11-19T09:52:39.327Z] 10117.04 IOPS, 39.52 MiB/s [2024-11-19T09:52:39.327Z] 10239.65 IOPS, 40.00 MiB/s [2024-11-19T09:52:39.327Z] [2024-11-19 10:52:36.489506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:102552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.878 [2024-11-19 10:52:36.489546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:31.878 [2024-11-19 10:52:36.489580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:102568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.878 [2024-11-19 10:52:36.489588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:31.878 [2024-11-19 10:52:36.489602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:102584 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:31.878 [2024-11-19 10:52:36.489615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:31.878 [2024-11-19 10:52:36.489628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:102600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.878 [2024-11-19 10:52:36.489635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:31.878 [2024-11-19 10:52:36.489647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:102616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.878 [2024-11-19 10:52:36.489654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:31.878 [2024-11-19 10:52:36.489667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:102632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.878 [2024-11-19 10:52:36.489674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:31.878 [2024-11-19 10:52:36.489687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:102648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.878 [2024-11-19 10:52:36.489693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:31.878 [2024-11-19 10:52:36.489706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:102664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.878 [2024-11-19 10:52:36.489713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:31.878 [2024-11-19 10:52:36.489725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:102680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.878 [2024-11-19 10:52:36.489732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:31.878 [2024-11-19 10:52:36.489745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:102696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.878 [2024-11-19 10:52:36.489752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:31.878 [2024-11-19 10:52:36.489764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:102712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.878 [2024-11-19 10:52:36.489771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:31.878 [2024-11-19 10:52:36.489784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:102728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.878 [2024-11-19 10:52:36.489791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:31.878 [2024-11-19 10:52:36.489804] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:102744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.878 [2024-11-19 10:52:36.489810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:31.878 [2024-11-19 10:52:36.489823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:102760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.878 [2024-11-19 10:52:36.489830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:31.878 [2024-11-19 10:52:36.489842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:102776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.879 [2024-11-19 10:52:36.489849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:31.879 [2024-11-19 10:52:36.489864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:102792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.879 [2024-11-19 10:52:36.489871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:31.879 [2024-11-19 10:52:36.489884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:102808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.879 [2024-11-19 10:52:36.489891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:31.879 [2024-11-19 10:52:36.489904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:102824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.879 [2024-11-19 10:52:36.489912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:31.879 [2024-11-19 10:52:36.489925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:102840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.879 [2024-11-19 10:52:36.489932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:31.879 [2024-11-19 10:52:36.489945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:102856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.879 [2024-11-19 10:52:36.489957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.879 [2024-11-19 10:52:36.489969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:102872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.879 [2024-11-19 10:52:36.489976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:31.879 [2024-11-19 10:52:36.489989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:102888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.879 [2024-11-19 10:52:36.489996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:31.879 [2024-11-19 
10:52:36.490008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:102480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.879 [2024-11-19 10:52:36.490015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:31.879 [2024-11-19 10:52:36.490027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:102904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.879 [2024-11-19 10:52:36.490034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:31.879 [2024-11-19 10:52:36.490047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:102920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.879 [2024-11-19 10:52:36.490054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:31.879 [2024-11-19 10:52:36.490066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:102936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.879 [2024-11-19 10:52:36.490073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:31.879 [2024-11-19 10:52:36.490086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:102952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.879 [2024-11-19 10:52:36.490093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:31.879 [2024-11-19 10:52:36.490107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:102968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.879 [2024-11-19 10:52:36.490113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:31.879 [2024-11-19 10:52:36.490126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:102984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.879 [2024-11-19 10:52:36.490133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:31.879 [2024-11-19 10:52:36.490145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:103000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.879 [2024-11-19 10:52:36.490152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:31.879 [2024-11-19 10:52:36.491041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.879 [2024-11-19 10:52:36.491060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:31.879 [2024-11-19 10:52:36.491076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:103032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.879 [2024-11-19 10:52:36.491084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:78 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:31.879 [2024-11-19 10:52:36.491096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:103048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.879 [2024-11-19 10:52:36.491103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:31.879 [2024-11-19 10:52:36.491117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:103064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.879 [2024-11-19 10:52:36.491124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:31.879 [2024-11-19 10:52:36.491137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:103080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.879 [2024-11-19 10:52:36.491144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:31.880 [2024-11-19 10:52:36.491156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:103096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.880 [2024-11-19 10:52:36.491163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:31.880 [2024-11-19 10:52:36.491176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:103112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.880 [2024-11-19 10:52:36.491183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:31.880 [2024-11-19 10:52:36.491195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:103128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.880 [2024-11-19 10:52:36.491203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:31.881 [2024-11-19 10:52:36.491215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:103144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.881 [2024-11-19 10:52:36.491221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:31.881 [2024-11-19 10:52:36.491234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:103160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.881 [2024-11-19 10:52:36.491244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:31.881 [2024-11-19 10:52:36.491257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:103176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.881 [2024-11-19 10:52:36.491264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:31.881 [2024-11-19 10:52:36.491276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:103192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.881 [2024-11-19 10:52:36.491283] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.881 [2024-11-19 10:52:36.491295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:103208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.881 [2024-11-19 10:52:36.491302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:31.881 [2024-11-19 10:52:36.491315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:103224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.881 [2024-11-19 10:52:36.491322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:31.881 [2024-11-19 10:52:36.491334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.881 [2024-11-19 10:52:36.491341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:31.881 [2024-11-19 10:52:36.491354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:103256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.881 [2024-11-19 10:52:36.491361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:31.881 [2024-11-19 10:52:36.491373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:103272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.881 [2024-11-19 10:52:36.491380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:31.881 [2024-11-19 10:52:36.491393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:103288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.881 [2024-11-19 10:52:36.491400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:31.881 [2024-11-19 10:52:36.491412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:103304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.881 [2024-11-19 10:52:36.491419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:31.881 [2024-11-19 10:52:36.491433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:102504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.881 [2024-11-19 10:52:36.491440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:31.881 [2024-11-19 10:52:36.491453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:102528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.881 [2024-11-19 10:52:36.491459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.881 [2024-11-19 10:52:36.491881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:102536 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:31.881 [2024-11-19 10:52:36.491898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.881 [2024-11-19 10:52:36.491913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:103320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.881 [2024-11-19 10:52:36.491920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:31.881 [2024-11-19 10:52:36.491933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.881 [2024-11-19 10:52:36.491939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:31.881 [2024-11-19 10:52:36.491958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:103352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.881 [2024-11-19 10:52:36.491965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:31.881 [2024-11-19 10:52:36.491977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:103368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.881 [2024-11-19 10:52:36.491985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:31.881 [2024-11-19 10:52:36.491997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:103384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.881 [2024-11-19 10:52:36.492004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:31.881 [2024-11-19 10:52:36.492017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.881 [2024-11-19 10:52:36.492024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:31.881 [2024-11-19 10:52:36.492037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:103416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.881 [2024-11-19 10:52:36.492044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:31.881 [2024-11-19 10:52:36.492056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.881 [2024-11-19 10:52:36.492063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:31.881 [2024-11-19 10:52:36.492075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:102592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.881 [2024-11-19 10:52:36.492082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:31.881 [2024-11-19 10:52:36.492095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:9 nsid:1 lba:102624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.881 [2024-11-19 10:52:36.492101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:31.881 [2024-11-19 10:52:36.492114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:102656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.881 [2024-11-19 10:52:36.492121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:31.881 [2024-11-19 10:52:36.492133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:102688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.881 [2024-11-19 10:52:36.492142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:31.882 [2024-11-19 10:52:36.492154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:102720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.882 [2024-11-19 10:52:36.492161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:31.882 [2024-11-19 10:52:36.492175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:102752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.882 [2024-11-19 10:52:36.492182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:31.882 [2024-11-19 10:52:36.492196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:102784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.882 [2024-11-19 10:52:36.492203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:31.882 [2024-11-19 10:52:36.492216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:102816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.882 [2024-11-19 10:52:36.492222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:31.882 [2024-11-19 10:52:36.492235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:102848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.882 [2024-11-19 10:52:36.492241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:31.882 [2024-11-19 10:52:36.492254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:102880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.882 [2024-11-19 10:52:36.492261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:31.882 [2024-11-19 10:52:36.492273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:103424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.882 [2024-11-19 10:52:36.492280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:31.882 [2024-11-19 
10:52:36.492293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:103440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.882 [2024-11-19 10:52:36.492299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:25:31.882 [2024-11-19 10:52:36.492311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:103456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.882 [2024-11-19 10:52:36.492319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:25:31.882 [2024-11-19 10:52:36.492331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:103472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.882 [2024-11-19 10:52:36.492338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:25:31.882 [2024-11-19 10:52:36.492351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:103488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:31.882 [2024-11-19 10:52:36.492357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:25:31.882 10325.41 IOPS, 40.33 MiB/s [2024-11-19T09:52:39.331Z] 10360.00 IOPS, 40.47 MiB/s [2024-11-19T09:52:39.331Z] 10381.00 IOPS, 40.55 MiB/s [2024-11-19T09:52:39.331Z] Received shutdown signal, test time was about 29.031663 seconds
00:25:31.882 
00:25:31.882 Latency(us)
00:25:31.882 [2024-11-19T09:52:39.331Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:31.882 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:31.882 Verification LBA range: start 0x0 length 0x4000
00:25:31.882 Nvme0n1 : 29.03 10378.73 40.54 0.00 0.00 12311.17 222.61 3078254.41
00:25:31.882 [2024-11-19T09:52:39.331Z] ===================================================================================================================
00:25:31.882 [2024-11-19T09:52:39.331Z] Total : 10378.73 40.54 0.00 0.00 12311.17 222.61 3078254.41
00:25:31.882 10:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:31.882 10:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:25:31.882 10:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:31.882 10:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:25:31.882 10:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:31.882 10:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:25:31.882 10:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:31.882 10:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:25:31.882 10:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:31.882 10:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:31.882 rmmod nvme_tcp
00:25:31.882 rmmod nvme_fabrics
00:25:32.145 rmmod nvme_keyring
00:25:32.145 10:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:32.145 10:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:25:32.145 10:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:25:32.145 10:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 1801201 ']'
00:25:32.145 10:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 1801201
00:25:32.145 10:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1801201 ']'
00:25:32.145 10:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1801201
00:25:32.145 10:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:25:32.145 10:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:32.145 10:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1801201
00:25:32.145 10:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:32.145 10:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:32.145 10:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1801201'
00:25:32.145 killing process with pid 1801201
00:25:32.145 10:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1801201
00:25:32.145 10:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1801201
00:25:32.145 10:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:25:32.145 10:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:25:32.145 10:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:25:32.145 10:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:25:32.145 10:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:25:32.146 10:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:25:32.146 10:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:25:32.146 10:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:25:32.146 10:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:25:32.146 10:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:32.146 10:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:32.146 10:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:25:34.684 
00:25:34.684 real 0m40.649s
00:25:34.684 user 1m50.348s
00:25:34.684 sys 0m11.594s
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:25:34.684 ************************************
00:25:34.684 END TEST nvmf_host_multipath_status
00:25:34.684 ************************************
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:25:34.684 ************************************
00:25:34.684 START TEST nvmf_discovery_remove_ifc
00:25:34.684 ************************************
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:25:34.684 * Looking for test storage...
00:25:34.684 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-:
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-:
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<'
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 ))
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:25:34.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:34.684 --rc genhtml_branch_coverage=1
00:25:34.684 --rc genhtml_function_coverage=1
00:25:34.684 --rc genhtml_legend=1
00:25:34.684 --rc geninfo_all_blocks=1
00:25:34.684 --rc geninfo_unexecuted_blocks=1
00:25:34.684 
00:25:34.684 '
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:25:34.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:34.684 --rc genhtml_branch_coverage=1
00:25:34.684 --rc genhtml_function_coverage=1
00:25:34.684 --rc genhtml_legend=1
00:25:34.684 --rc geninfo_all_blocks=1
00:25:34.684 --rc geninfo_unexecuted_blocks=1
00:25:34.684 
00:25:34.684 '
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:25:34.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:34.684 --rc genhtml_branch_coverage=1
00:25:34.684 --rc genhtml_function_coverage=1
00:25:34.684 --rc genhtml_legend=1
00:25:34.684 --rc geninfo_all_blocks=1
00:25:34.684 --rc geninfo_unexecuted_blocks=1
00:25:34.684 
00:25:34.684 '
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:25:34.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:34.684 --rc genhtml_branch_coverage=1
00:25:34.684 --rc genhtml_function_coverage=1
00:25:34.684 --rc genhtml_legend=1
00:25:34.684 --rc geninfo_all_blocks=1
00:25:34.684 --rc geninfo_unexecuted_blocks=1
00:25:34.684 
00:25:34.684 '
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:25:34.684 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:25:34.685 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob
00:25:34.685 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:34.685 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:34.685 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:34.685 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:34.685 10:52:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.685 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.685 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:34.685 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.685 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:25:34.685 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:34.685 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:34.685 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:34.685 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:34.685 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:34.685 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:34.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:34.685 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:34.685 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:34.685 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:34.685 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 
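The '[: : integer expression expected' complaint above comes from nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']': test(1) refuses a numeric comparison when the left operand expands to an empty string. A minimal reproduction and two defensive rewrites (the variable name is illustrative, not the script's own):

flag=''
[ "$flag" -eq 1 ] && echo hit               # bash: [: : integer expression expected
[ "${flag:-0}" -eq 1 ] && echo hit          # default the empty value before comparing
[[ -n $flag && $flag -eq 1 ]] && echo hit   # or guard with a non-empty test first

The error is harmless here because the '[' builtin returns nonzero and the script continues, but the noise recurs on every source of common.sh.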
00:25:34.685 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:34.685 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:34.685 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:34.685 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:34.685 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:34.685 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:34.685 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:34.685 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:34.685 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:34.685 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:34.685 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:34.685 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:34.685 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:34.685 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:34.685 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:34.685 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:34.685 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:25:34.685 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@321 -- # x722=() 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:41.260 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:41.260 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:41.260 Found net devices under 0000:86:00.0: cvl_0_0 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:41.260 Found net devices under 0000:86:00.1: cvl_0_1 00:25:41.260 10:52:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:25:41.260 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:41.261 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:41.261 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.460 ms 00:25:41.261 00:25:41.261 --- 10.0.0.2 ping statistics --- 00:25:41.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.261 rtt min/avg/max/mdev = 0.460/0.460/0.460/0.000 ms 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:41.261 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:41.261 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:25:41.261 00:25:41.261 --- 10.0.0.1 ping statistics --- 00:25:41.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.261 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=1810005 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 1810005 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1810005 ']' 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:41.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:41.261 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:41.261 [2024-11-19 10:52:47.836279] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:25:41.261 [2024-11-19 10:52:47.836329] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:41.261 [2024-11-19 10:52:47.915321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.261 [2024-11-19 10:52:47.956375] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:41.261 [2024-11-19 10:52:47.956408] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:41.261 [2024-11-19 10:52:47.956415] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:41.261 [2024-11-19 10:52:47.956421] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:41.261 [2024-11-19 10:52:47.956426] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:41.261 [2024-11-19 10:52:47.957015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:41.261 10:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:41.261 10:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:25:41.261 10:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:41.261 10:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:41.261 10:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:41.261 10:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:41.261 10:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:41.261 10:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.261 10:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:41.261 [2024-11-19 10:52:48.100445] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:41.261 [2024-11-19 10:52:48.108631] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:41.261 null0 00:25:41.261 [2024-11-19 10:52:48.140623] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:41.261 10:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.261 10:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1810050 00:25:41.261 10:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:25:41.261 10:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1810050 /tmp/host.sock 00:25:41.261 10:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1810050 ']' 00:25:41.261 10:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:25:41.261 10:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:41.261 10:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:41.261 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:41.261 10:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:41.261 10:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:41.261 [2024-11-19 10:52:48.207755] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:25:41.261 [2024-11-19 10:52:48.207801] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1810050 ] 00:25:41.261 [2024-11-19 10:52:48.281131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.261 [2024-11-19 10:52:48.324372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:41.261 10:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:41.261 10:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:25:41.261 10:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:41.261 10:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:41.261 10:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.261 10:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:41.262 10:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.262 10:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:41.262 10:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.262 10:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:41.262 10:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.262 10:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:41.262 10:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.262 10:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:42.203 [2024-11-19 10:52:49.507098] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:42.203 [2024-11-19 10:52:49.507117] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:42.203 [2024-11-19 10:52:49.507134] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:42.203 [2024-11-19 10:52:49.593407] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:42.463 [2024-11-19 10:52:49.688222] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:42.463 [2024-11-19 10:52:49.689019] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x19809f0:1 started. 00:25:42.463 [2024-11-19 10:52:49.690378] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:42.463 [2024-11-19 10:52:49.690416] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:42.463 [2024-11-19 10:52:49.690435] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:42.463 [2024-11-19 10:52:49.690446] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:42.463 [2024-11-19 10:52:49.690463] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:42.463 10:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.463 10:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:42.463 10:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:42.463 [2024-11-19 10:52:49.695987] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x19809f0 was disconnected and freed. delete nvme_qpair. 
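Condensed, the bring-up traced above is: move one port of the NIC into a private namespace as the target side, keep its sibling port as the initiator, punch the NVMe/TCP port through iptables, then run one nvmf_tgt inside the namespace and a second one outside it as the host/initiator app. A sketch using the names and flags from this run ('rpc_cmd -s /tmp/host.sock' in the trace resolves to scripts/rpc.py; the target-side subsystem setup that created null0 and the 8009/4420 listeners is elided):

ip netns add cvl_0_0_ns_spdk                          # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move one port in
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator port stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # as traced above

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &    # target
./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &   # host app

./scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
./scripts/rpc.py -s /tmp/host.sock framework_start_init
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach

--wait-for-attach makes the discovery RPC return only once nvme0 is attached and nvme0n1 exists, which is why the first wait_for_bdev check above passes immediately.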
00:25:42.463 10:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:42.463 10:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:42.463 10:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.463 10:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:42.463 10:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:42.463 10:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:42.463 10:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.463 10:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:42.463 10:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:25:42.463 10:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:25:42.463 10:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:42.463 10:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:42.463 10:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:42.463 10:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:42.463 10:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.463 10:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:42.463 10:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:42.463 10:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:42.463 10:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.463 10:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:42.463 10:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:43.841 10:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:43.841 10:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:43.841 10:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:43.841 10:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.842 10:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:43.842 10:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:43.842 10:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:43.842 10:52:50 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.842 10:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:43.842 10:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:44.779 10:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:44.779 10:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:44.779 10:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:44.779 10:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.779 10:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:44.779 10:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:44.779 10:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:44.779 10:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.779 10:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:44.779 10:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:45.714 10:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:45.714 10:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:45.714 10:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:45.714 10:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.714 10:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:45.714 10:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:45.714 10:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:45.714 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.714 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:45.714 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:46.652 10:52:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:46.652 10:52:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:46.652 10:52:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:46.652 10:52:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.652 10:52:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:46.652 10:52:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:46.652 10:52:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:46.652 10:52:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.652 10:52:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:46.652 10:52:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:48.026 10:52:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:48.026 10:52:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:48.026 10:52:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:48.026 10:52:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.026 10:52:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:48.026 10:52:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:48.026 10:52:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:48.026 10:52:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.026 [2024-11-19 10:52:55.131984] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:48.026 [2024-11-19 10:52:55.132027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:48.026 [2024-11-19 10:52:55.132054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:48.026 [2024-11-19 10:52:55.132064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:48.026 [2024-11-19 10:52:55.132071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:48.026 [2024-11-19 10:52:55.132078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:48.026 [2024-11-19 10:52:55.132085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:48.026 [2024-11-19 10:52:55.132093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:48.026 [2024-11-19 10:52:55.132099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:48.026 [2024-11-19 10:52:55.132107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:48.026 [2024-11-19 10:52:55.132113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:48.026 [2024-11-19 10:52:55.132120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195d220 is same with the state(6) to be set 00:25:48.026 [2024-11-19 
10:52:55.142007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x195d220 (9): Bad file descriptor 00:25:48.026 10:52:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:48.026 10:52:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:48.026 [2024-11-19 10:52:55.152040] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:48.026 [2024-11-19 10:52:55.152053] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:48.026 [2024-11-19 10:52:55.152057] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:48.026 [2024-11-19 10:52:55.152062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:48.026 [2024-11-19 10:52:55.152083] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:48.959 10:52:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:48.959 10:52:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:48.959 10:52:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:48.959 10:52:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.959 10:52:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:48.959 10:52:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:48.959 10:52:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:48.959 [2024-11-19 10:52:56.175992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:48.959 [2024-11-19 10:52:56.176059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x195d220 with addr=10.0.0.2, port=4420 00:25:48.959 [2024-11-19 10:52:56.176091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195d220 is same with the state(6) to be set 00:25:48.959 [2024-11-19 10:52:56.176140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x195d220 (9): Bad file descriptor 00:25:48.959 [2024-11-19 10:52:56.177092] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:25:48.959 [2024-11-19 10:52:56.177155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:48.959 [2024-11-19 10:52:56.177179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:48.959 [2024-11-19 10:52:56.177202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:48.959 [2024-11-19 10:52:56.177221] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:48.959 [2024-11-19 10:52:56.177237] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
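The one-second cadence in the trace is the wait_for_bdev/get_bdev_list pair from discovery_remove_ifc.sh: list the bdevs over the host socket, normalize them to one sorted line, and sleep until that line matches the expectation. Re-sketched, assuming scripts/rpc.py is called directly (the real helper also bounds how long it will wait):

get_bdev_list() {
        ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {               # poll once per second until the list equals $1
        while [[ $(get_bdev_list) != "$1" ]]; do
                sleep 1
        done
}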
00:25:48.959 [2024-11-19 10:52:56.177250] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:48.959 [2024-11-19 10:52:56.177270] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:48.959 [2024-11-19 10:52:56.177284] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:48.959 10:52:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.959 10:52:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:48.959 10:52:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:49.892 [2024-11-19 10:52:57.179806] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:49.892 [2024-11-19 10:52:57.179825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:49.892 [2024-11-19 10:52:57.179836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:49.892 [2024-11-19 10:52:57.179842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:49.892 [2024-11-19 10:52:57.179848] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:25:49.893 [2024-11-19 10:52:57.179855] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:49.893 [2024-11-19 10:52:57.179859] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:49.893 [2024-11-19 10:52:57.179863] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
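The reconnect churn above is deliberate: at discovery_remove_ifc.sh@75-76 the test unplumbed the target interface, so every reconnect attempt now dies with errno 110 (connection timed out). The failure step, with what the three discovery timeouts from the attach are doing (a sketch; wait_for_bdev from the block above):

ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down

# --reconnect-delay-sec 1: retry the controller roughly once per second
# --fast-io-fail-timeout-sec 1: after ~1s disconnected, pending I/O fails fast
# --ctrlr-loss-timeout-sec 2: after ~2s, give up and delete nvme0 and its bdev
wait_for_bdev ''                # the bdev list drains to empty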
00:25:49.893 [2024-11-19 10:52:57.179882] bdev_nvme.c:7229:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:49.893 [2024-11-19 10:52:57.179903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.893 [2024-11-19 10:52:57.179912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.893 [2024-11-19 10:52:57.179920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.893 [2024-11-19 10:52:57.179927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.893 [2024-11-19 10:52:57.179937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.893 [2024-11-19 10:52:57.179943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.893 [2024-11-19 10:52:57.179955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.893 [2024-11-19 10:52:57.179965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.893 [2024-11-19 10:52:57.179972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.893 [2024-11-19 10:52:57.179978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.893 [2024-11-19 10:52:57.179985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
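While the controller sits in this failed state, the discovery service also drops its cached entry for nqn.2016-06.io.spdk:cnode0 (remove_discovery_entry above). One way to watch the host app's view from outside during this window (a sketch; bdev_nvme_get_controllers is the standard query RPC, output left raw rather than assuming its field names):

./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq .
./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'   # prints nothing once nvme0n1 is gone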
00:25:49.893 [2024-11-19 10:52:57.180503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x194c900 (9): Bad file descriptor 00:25:49.893 [2024-11-19 10:52:57.181513] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:49.893 [2024-11-19 10:52:57.181524] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:25:49.893 10:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:49.893 10:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:49.893 10:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:49.893 10:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.893 10:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:49.893 10:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:49.893 10:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:49.893 10:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.893 10:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:49.893 10:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:49.893 10:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:49.893 10:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:49.893 10:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:49.893 10:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:49.893 10:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:49.893 10:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.893 10:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:49.893 10:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:49.893 10:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:49.893 10:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.151 10:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:50.151 10:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:51.089 10:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:51.089 10:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:51.089 10:52:58 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:51.089 10:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.089 10:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:51.089 10:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:51.089 10:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:51.089 10:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.089 10:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:51.089 10:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:52.025 [2024-11-19 10:52:59.191521] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:52.025 [2024-11-19 10:52:59.191537] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:52.025 [2024-11-19 10:52:59.191550] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:52.025 [2024-11-19 10:52:59.277821] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:52.025 [2024-11-19 10:52:59.372571] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:25:52.025 [2024-11-19 10:52:59.373151] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1951760:1 started. 00:25:52.025 [2024-11-19 10:52:59.374173] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:52.025 [2024-11-19 10:52:59.374204] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:52.025 [2024-11-19 10:52:59.374221] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:52.025 [2024-11-19 10:52:59.374234] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:52.025 [2024-11-19 10:52:59.374240] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:52.025 [2024-11-19 10:52:59.380746] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1951760 was disconnected and freed. delete nvme_qpair. 
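With the address and link restored, the discovery poller reconnects, fetches the log page again, and re-creates the nvme1n1 bdev, as the discovery_attach_cb/discovery_log_page_cb lines above show. The script detects the recovery by polling the host's bdev list over its private RPC socket; a compact sketch of that polling idiom, using SPDK's stock rpc.py in place of the test framework's rpc_cmd wrapper (the socket path, jq filter, and bdev name are the ones visible in the trace):

    # Poll once per second until the expected bdev reappears.
    get_bdev_list() {
        ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    while [[ "$(get_bdev_list)" != "nvme1n1" ]]; do
        sleep 1
    done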
00:25:52.025 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:52.025 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:52.025 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:52.025 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.025 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:52.025 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:52.025 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:52.025 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.284 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:52.284 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:52.284 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1810050 00:25:52.284 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1810050 ']' 00:25:52.284 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1810050 00:25:52.284 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:25:52.284 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:52.284 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1810050 00:25:52.284 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:52.284 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:52.284 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1810050' 00:25:52.284 killing process with pid 1810050 00:25:52.284 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1810050 00:25:52.284 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1810050 00:25:52.284 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:52.284 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:52.284 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:25:52.284 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:52.284 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:25:52.284 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:52.284 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:52.284 rmmod nvme_tcp 00:25:52.284 rmmod nvme_fabrics 00:25:52.284 rmmod nvme_keyring 00:25:52.543 10:52:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:52.543 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:25:52.543 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:25:52.543 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 1810005 ']' 00:25:52.543 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 1810005 00:25:52.543 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1810005 ']' 00:25:52.543 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1810005 00:25:52.543 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:25:52.543 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:52.543 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1810005 00:25:52.543 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:52.543 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:52.543 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1810005' 00:25:52.543 killing process with pid 1810005 00:25:52.543 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1810005 00:25:52.543 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1810005 00:25:52.543 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:52.543 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:52.543 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:52.543 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:25:52.543 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:25:52.543 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:52.543 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:25:52.543 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:52.544 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:52.544 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:52.544 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:52.544 10:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:55.083 00:25:55.083 real 0m20.343s 00:25:55.083 user 0m24.492s 00:25:55.083 sys 0m5.871s 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:55.083 ************************************ 00:25:55.083 END TEST nvmf_discovery_remove_ifc 00:25:55.083 ************************************ 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.083 ************************************ 00:25:55.083 START TEST nvmf_identify_kernel_target 00:25:55.083 ************************************ 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:55.083 * Looking for test storage... 00:25:55.083 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:55.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:55.083 --rc genhtml_branch_coverage=1 00:25:55.083 --rc genhtml_function_coverage=1 00:25:55.083 --rc genhtml_legend=1 00:25:55.083 --rc geninfo_all_blocks=1 00:25:55.083 --rc geninfo_unexecuted_blocks=1 00:25:55.083 00:25:55.083 ' 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:55.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:55.083 --rc genhtml_branch_coverage=1 00:25:55.083 --rc genhtml_function_coverage=1 00:25:55.083 --rc genhtml_legend=1 00:25:55.083 --rc geninfo_all_blocks=1 00:25:55.083 --rc geninfo_unexecuted_blocks=1 00:25:55.083 00:25:55.083 ' 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:55.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:55.083 --rc genhtml_branch_coverage=1 00:25:55.083 --rc genhtml_function_coverage=1 00:25:55.083 --rc genhtml_legend=1 00:25:55.083 --rc geninfo_all_blocks=1 00:25:55.083 --rc geninfo_unexecuted_blocks=1 00:25:55.083 00:25:55.083 ' 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:55.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:55.083 --rc genhtml_branch_coverage=1 00:25:55.083 --rc genhtml_function_coverage=1 00:25:55.083 --rc genhtml_legend=1 00:25:55.083 --rc geninfo_all_blocks=1 00:25:55.083 --rc geninfo_unexecuted_blocks=1 00:25:55.083 00:25:55.083 ' 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:55.083 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:55.084 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:55.084 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:55.084 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:55.084 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:55.084 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:55.084 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:55.084 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:55.084 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:55.084 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:55.084 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:55.084 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:25:55.084 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:55.084 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:55.084 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:55.084 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.084 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.084 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.084 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:55.084 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.084 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:25:55.084 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:55.084 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:55.084 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:55.084 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:55.084 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:55.084 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:25:55.084 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:55.084 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:55.084 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:55.084 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:55.084 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:25:55.084 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:55.084 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:55.084 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:55.084 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:55.084 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:55.084 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:55.084 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:55.084 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:55.084 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:55.084 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:55.084 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:25:55.084 10:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:26:01.659 10:53:07 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:01.659 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:01.659 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:01.659 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:01.660 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:01.660 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:01.660 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:01.660 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:01.660 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:01.660 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:01.660 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:01.660 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:01.660 Found net devices under 0000:86:00.0: cvl_0_0 00:26:01.660 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:01.660 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:01.660 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:01.660 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:01.660 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:01.660 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:01.660 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:01.660 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:01.660 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:01.660 Found net devices under 0000:86:00.1: cvl_0_1 00:26:01.660 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:26:01.660 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:01.660 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:26:01.660 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:01.660 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:01.660 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:01.660 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:01.660 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:01.660 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:01.660 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:01.660 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:01.660 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:01.660 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:01.660 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:01.660 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:01.660 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:01.660 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:01.660 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:01.660 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:01.660 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:01.660 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:01.660 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:01.660 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:01.660 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:01.660 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:01.660 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:01.660 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:01.660 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:01.660 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:01.660 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:01.660 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.501 ms 00:26:01.660 00:26:01.660 --- 10.0.0.2 ping statistics --- 00:26:01.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:01.660 rtt min/avg/max/mdev = 0.501/0.501/0.501/0.000 ms 00:26:01.660 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:01.660 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:01.660 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:26:01.660 00:26:01.660 --- 10.0.0.1 ping statistics --- 00:26:01.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:01.660 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:26:01.660 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:01.660 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:26:01.660 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:01.660 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:01.660 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:01.660 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:01.660 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:01.660 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:01.660 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:01.660 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:01.660 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:01.660 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:26:01.660 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.660 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.660 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.660 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.660 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:01.660 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.660 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.660 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.660 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.660 10:53:08 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:01.660 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:01.660 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:01.660 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:26:01.660 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:01.661 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:01.661 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:01.661 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:26:01.661 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:26:01.661 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:26:01.661 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:01.661 10:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:03.565 Waiting for block devices as requested 00:26:03.824 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:26:03.824 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:03.824 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:04.083 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:04.083 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:04.083 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:04.342 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:04.342 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:04.342 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:04.342 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:04.601 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:04.601 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:04.601 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:04.860 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:04.860 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:04.860 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:05.120 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:05.120 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:05.120 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:05.120 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:26:05.120 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:26:05.120 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:05.120 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
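Before the kernel-target setup below claims a block device, it is worth condensing the namespace wiring that nvmf_tcp_init traced above: one port of the two-port e810 NIC (cvl_0_0) is moved into a private network namespace to act as the target, its sibling (cvl_0_1) stays in the root namespace as the initiator, a firewall exception is opened for the NVMe/TCP port, and connectivity is proven with a ping in each direction. A sketch condensed from the trace (device names come from the "Found net devices under 0000:86:00.x" lines):

    # Target side lives in the namespace, initiator side in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Admit NVMe/TCP traffic arriving on the initiator-side port.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> root ns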
00:26:05.120 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:26:05.120 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:05.120 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:05.120 No valid GPT data, bailing 00:26:05.120 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:05.120 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:26:05.120 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:26:05.120 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:26:05.120 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:26:05.120 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:05.120 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:05.120 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:05.120 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:05.120 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:26:05.120 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:26:05.120 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:26:05.120 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:26:05.120 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:26:05.120 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:26:05.120 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:26:05.120 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:05.120 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:26:05.381 00:26:05.381 Discovery Log Number of Records 2, Generation counter 2 00:26:05.381 =====Discovery Log Entry 0====== 00:26:05.381 trtype: tcp 00:26:05.381 adrfam: ipv4 00:26:05.381 subtype: current discovery subsystem 00:26:05.381 treq: not specified, sq flow control disable supported 00:26:05.381 portid: 1 00:26:05.381 trsvcid: 4420 00:26:05.381 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:05.381 traddr: 10.0.0.1 00:26:05.381 eflags: none 00:26:05.381 sectype: none 00:26:05.381 =====Discovery Log Entry 1====== 00:26:05.381 trtype: tcp 00:26:05.381 adrfam: ipv4 00:26:05.381 subtype: nvme subsystem 00:26:05.381 treq: not specified, sq flow control disable 
supported 00:26:05.381 portid: 1 00:26:05.381 trsvcid: 4420 00:26:05.381 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:05.381 traddr: 10.0.0.1 00:26:05.381 eflags: none 00:26:05.381 sectype: none 00:26:05.381 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:05.381 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:05.381 ===================================================== 00:26:05.381 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:05.381 ===================================================== 00:26:05.381 Controller Capabilities/Features 00:26:05.381 ================================ 00:26:05.381 Vendor ID: 0000 00:26:05.381 Subsystem Vendor ID: 0000 00:26:05.381 Serial Number: 4455caa70774ebc55823 00:26:05.381 Model Number: Linux 00:26:05.381 Firmware Version: 6.8.9-20 00:26:05.381 Recommended Arb Burst: 0 00:26:05.381 IEEE OUI Identifier: 00 00 00 00:26:05.381 Multi-path I/O 00:26:05.381 May have multiple subsystem ports: No 00:26:05.381 May have multiple controllers: No 00:26:05.381 Associated with SR-IOV VF: No 00:26:05.381 Max Data Transfer Size: Unlimited 00:26:05.381 Max Number of Namespaces: 0 00:26:05.381 Max Number of I/O Queues: 1024 00:26:05.381 NVMe Specification Version (VS): 1.3 00:26:05.381 NVMe Specification Version (Identify): 1.3 00:26:05.381 Maximum Queue Entries: 1024 00:26:05.381 Contiguous Queues Required: No 00:26:05.381 Arbitration Mechanisms Supported 00:26:05.381 Weighted Round Robin: Not Supported 00:26:05.381 Vendor Specific: Not Supported 00:26:05.381 Reset Timeout: 7500 ms 00:26:05.381 Doorbell Stride: 4 bytes 00:26:05.381 NVM Subsystem Reset: Not Supported 00:26:05.381 Command Sets Supported 00:26:05.381 NVM Command Set: Supported 00:26:05.381 Boot Partition: Not Supported 00:26:05.381 Memory Page Size Minimum: 4096 bytes 00:26:05.381 Memory Page Size Maximum: 4096 bytes 00:26:05.381 Persistent Memory Region: Not Supported 00:26:05.381 Optional Asynchronous Events Supported 00:26:05.381 Namespace Attribute Notices: Not Supported 00:26:05.381 Firmware Activation Notices: Not Supported 00:26:05.381 ANA Change Notices: Not Supported 00:26:05.381 PLE Aggregate Log Change Notices: Not Supported 00:26:05.381 LBA Status Info Alert Notices: Not Supported 00:26:05.381 EGE Aggregate Log Change Notices: Not Supported 00:26:05.381 Normal NVM Subsystem Shutdown event: Not Supported 00:26:05.381 Zone Descriptor Change Notices: Not Supported 00:26:05.381 Discovery Log Change Notices: Supported 00:26:05.381 Controller Attributes 00:26:05.381 128-bit Host Identifier: Not Supported 00:26:05.381 Non-Operational Permissive Mode: Not Supported 00:26:05.381 NVM Sets: Not Supported 00:26:05.381 Read Recovery Levels: Not Supported 00:26:05.381 Endurance Groups: Not Supported 00:26:05.381 Predictable Latency Mode: Not Supported 00:26:05.381 Traffic Based Keep ALive: Not Supported 00:26:05.381 Namespace Granularity: Not Supported 00:26:05.381 SQ Associations: Not Supported 00:26:05.381 UUID List: Not Supported 00:26:05.381 Multi-Domain Subsystem: Not Supported 00:26:05.381 Fixed Capacity Management: Not Supported 00:26:05.381 Variable Capacity Management: Not Supported 00:26:05.381 Delete Endurance Group: Not Supported 00:26:05.381 Delete NVM Set: Not Supported 00:26:05.381 Extended LBA Formats Supported: Not Supported 00:26:05.381 Flexible Data Placement 
Supported: Not Supported 00:26:05.381 00:26:05.381 Controller Memory Buffer Support 00:26:05.381 ================================ 00:26:05.381 Supported: No 00:26:05.381 00:26:05.381 Persistent Memory Region Support 00:26:05.381 ================================ 00:26:05.381 Supported: No 00:26:05.381 00:26:05.381 Admin Command Set Attributes 00:26:05.381 ============================ 00:26:05.381 Security Send/Receive: Not Supported 00:26:05.381 Format NVM: Not Supported 00:26:05.381 Firmware Activate/Download: Not Supported 00:26:05.381 Namespace Management: Not Supported 00:26:05.382 Device Self-Test: Not Supported 00:26:05.382 Directives: Not Supported 00:26:05.382 NVMe-MI: Not Supported 00:26:05.382 Virtualization Management: Not Supported 00:26:05.382 Doorbell Buffer Config: Not Supported 00:26:05.382 Get LBA Status Capability: Not Supported 00:26:05.382 Command & Feature Lockdown Capability: Not Supported 00:26:05.382 Abort Command Limit: 1 00:26:05.382 Async Event Request Limit: 1 00:26:05.382 Number of Firmware Slots: N/A 00:26:05.382 Firmware Slot 1 Read-Only: N/A 00:26:05.382 Firmware Activation Without Reset: N/A 00:26:05.382 Multiple Update Detection Support: N/A 00:26:05.382 Firmware Update Granularity: No Information Provided 00:26:05.382 Per-Namespace SMART Log: No 00:26:05.382 Asymmetric Namespace Access Log Page: Not Supported 00:26:05.382 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:05.382 Command Effects Log Page: Not Supported 00:26:05.382 Get Log Page Extended Data: Supported 00:26:05.382 Telemetry Log Pages: Not Supported 00:26:05.382 Persistent Event Log Pages: Not Supported 00:26:05.382 Supported Log Pages Log Page: May Support 00:26:05.382 Commands Supported & Effects Log Page: Not Supported 00:26:05.382 Feature Identifiers & Effects Log Page:May Support 00:26:05.382 NVMe-MI Commands & Effects Log Page: May Support 00:26:05.382 Data Area 4 for Telemetry Log: Not Supported 00:26:05.382 Error Log Page Entries Supported: 1 00:26:05.382 Keep Alive: Not Supported 00:26:05.382 00:26:05.382 NVM Command Set Attributes 00:26:05.382 ========================== 00:26:05.382 Submission Queue Entry Size 00:26:05.382 Max: 1 00:26:05.382 Min: 1 00:26:05.382 Completion Queue Entry Size 00:26:05.382 Max: 1 00:26:05.382 Min: 1 00:26:05.382 Number of Namespaces: 0 00:26:05.382 Compare Command: Not Supported 00:26:05.382 Write Uncorrectable Command: Not Supported 00:26:05.382 Dataset Management Command: Not Supported 00:26:05.382 Write Zeroes Command: Not Supported 00:26:05.382 Set Features Save Field: Not Supported 00:26:05.382 Reservations: Not Supported 00:26:05.382 Timestamp: Not Supported 00:26:05.382 Copy: Not Supported 00:26:05.382 Volatile Write Cache: Not Present 00:26:05.382 Atomic Write Unit (Normal): 1 00:26:05.382 Atomic Write Unit (PFail): 1 00:26:05.382 Atomic Compare & Write Unit: 1 00:26:05.382 Fused Compare & Write: Not Supported 00:26:05.382 Scatter-Gather List 00:26:05.382 SGL Command Set: Supported 00:26:05.382 SGL Keyed: Not Supported 00:26:05.382 SGL Bit Bucket Descriptor: Not Supported 00:26:05.382 SGL Metadata Pointer: Not Supported 00:26:05.382 Oversized SGL: Not Supported 00:26:05.382 SGL Metadata Address: Not Supported 00:26:05.382 SGL Offset: Supported 00:26:05.382 Transport SGL Data Block: Not Supported 00:26:05.382 Replay Protected Memory Block: Not Supported 00:26:05.382 00:26:05.382 Firmware Slot Information 00:26:05.382 ========================= 00:26:05.382 Active slot: 0 00:26:05.382 00:26:05.382 00:26:05.382 Error Log 00:26:05.382 
========= 00:26:05.382 00:26:05.382 Active Namespaces 00:26:05.382 ================= 00:26:05.382 Discovery Log Page 00:26:05.382 ================== 00:26:05.382 Generation Counter: 2 00:26:05.382 Number of Records: 2 00:26:05.382 Record Format: 0 00:26:05.382 00:26:05.382 Discovery Log Entry 0 00:26:05.382 ---------------------- 00:26:05.382 Transport Type: 3 (TCP) 00:26:05.382 Address Family: 1 (IPv4) 00:26:05.382 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:05.382 Entry Flags: 00:26:05.382 Duplicate Returned Information: 0 00:26:05.382 Explicit Persistent Connection Support for Discovery: 0 00:26:05.382 Transport Requirements: 00:26:05.382 Secure Channel: Not Specified 00:26:05.382 Port ID: 1 (0x0001) 00:26:05.382 Controller ID: 65535 (0xffff) 00:26:05.382 Admin Max SQ Size: 32 00:26:05.382 Transport Service Identifier: 4420 00:26:05.382 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:05.382 Transport Address: 10.0.0.1 00:26:05.382 Discovery Log Entry 1 00:26:05.382 ---------------------- 00:26:05.382 Transport Type: 3 (TCP) 00:26:05.382 Address Family: 1 (IPv4) 00:26:05.382 Subsystem Type: 2 (NVM Subsystem) 00:26:05.382 Entry Flags: 00:26:05.382 Duplicate Returned Information: 0 00:26:05.382 Explicit Persistent Connection Support for Discovery: 0 00:26:05.382 Transport Requirements: 00:26:05.382 Secure Channel: Not Specified 00:26:05.382 Port ID: 1 (0x0001) 00:26:05.382 Controller ID: 65535 (0xffff) 00:26:05.382 Admin Max SQ Size: 32 00:26:05.382 Transport Service Identifier: 4420 00:26:05.382 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:05.382 Transport Address: 10.0.0.1 00:26:05.382 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:05.382 get_feature(0x01) failed 00:26:05.382 get_feature(0x02) failed 00:26:05.382 get_feature(0x04) failed 00:26:05.382 ===================================================== 00:26:05.382 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:05.382 ===================================================== 00:26:05.382 Controller Capabilities/Features 00:26:05.382 ================================ 00:26:05.382 Vendor ID: 0000 00:26:05.382 Subsystem Vendor ID: 0000 00:26:05.382 Serial Number: 620360e3d96234bedcdd 00:26:05.382 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:05.382 Firmware Version: 6.8.9-20 00:26:05.382 Recommended Arb Burst: 6 00:26:05.382 IEEE OUI Identifier: 00 00 00 00:26:05.382 Multi-path I/O 00:26:05.382 May have multiple subsystem ports: Yes 00:26:05.382 May have multiple controllers: Yes 00:26:05.382 Associated with SR-IOV VF: No 00:26:05.382 Max Data Transfer Size: Unlimited 00:26:05.382 Max Number of Namespaces: 1024 00:26:05.382 Max Number of I/O Queues: 128 00:26:05.382 NVMe Specification Version (VS): 1.3 00:26:05.382 NVMe Specification Version (Identify): 1.3 00:26:05.382 Maximum Queue Entries: 1024 00:26:05.382 Contiguous Queues Required: No 00:26:05.382 Arbitration Mechanisms Supported 00:26:05.382 Weighted Round Robin: Not Supported 00:26:05.382 Vendor Specific: Not Supported 00:26:05.382 Reset Timeout: 7500 ms 00:26:05.382 Doorbell Stride: 4 bytes 00:26:05.382 NVM Subsystem Reset: Not Supported 00:26:05.382 Command Sets Supported 00:26:05.382 NVM Command Set: Supported 00:26:05.382 Boot Partition: Not Supported 00:26:05.382 
Memory Page Size Minimum: 4096 bytes 00:26:05.382 Memory Page Size Maximum: 4096 bytes 00:26:05.382 Persistent Memory Region: Not Supported 00:26:05.382 Optional Asynchronous Events Supported 00:26:05.382 Namespace Attribute Notices: Supported 00:26:05.382 Firmware Activation Notices: Not Supported 00:26:05.382 ANA Change Notices: Supported 00:26:05.382 PLE Aggregate Log Change Notices: Not Supported 00:26:05.382 LBA Status Info Alert Notices: Not Supported 00:26:05.382 EGE Aggregate Log Change Notices: Not Supported 00:26:05.382 Normal NVM Subsystem Shutdown event: Not Supported 00:26:05.382 Zone Descriptor Change Notices: Not Supported 00:26:05.382 Discovery Log Change Notices: Not Supported 00:26:05.382 Controller Attributes 00:26:05.383 128-bit Host Identifier: Supported 00:26:05.383 Non-Operational Permissive Mode: Not Supported 00:26:05.383 NVM Sets: Not Supported 00:26:05.383 Read Recovery Levels: Not Supported 00:26:05.383 Endurance Groups: Not Supported 00:26:05.383 Predictable Latency Mode: Not Supported 00:26:05.383 Traffic Based Keep ALive: Supported 00:26:05.383 Namespace Granularity: Not Supported 00:26:05.383 SQ Associations: Not Supported 00:26:05.383 UUID List: Not Supported 00:26:05.383 Multi-Domain Subsystem: Not Supported 00:26:05.383 Fixed Capacity Management: Not Supported 00:26:05.383 Variable Capacity Management: Not Supported 00:26:05.383 Delete Endurance Group: Not Supported 00:26:05.383 Delete NVM Set: Not Supported 00:26:05.383 Extended LBA Formats Supported: Not Supported 00:26:05.383 Flexible Data Placement Supported: Not Supported 00:26:05.383 00:26:05.383 Controller Memory Buffer Support 00:26:05.383 ================================ 00:26:05.383 Supported: No 00:26:05.383 00:26:05.383 Persistent Memory Region Support 00:26:05.383 ================================ 00:26:05.383 Supported: No 00:26:05.383 00:26:05.383 Admin Command Set Attributes 00:26:05.383 ============================ 00:26:05.383 Security Send/Receive: Not Supported 00:26:05.383 Format NVM: Not Supported 00:26:05.383 Firmware Activate/Download: Not Supported 00:26:05.383 Namespace Management: Not Supported 00:26:05.383 Device Self-Test: Not Supported 00:26:05.383 Directives: Not Supported 00:26:05.383 NVMe-MI: Not Supported 00:26:05.383 Virtualization Management: Not Supported 00:26:05.383 Doorbell Buffer Config: Not Supported 00:26:05.383 Get LBA Status Capability: Not Supported 00:26:05.383 Command & Feature Lockdown Capability: Not Supported 00:26:05.383 Abort Command Limit: 4 00:26:05.383 Async Event Request Limit: 4 00:26:05.383 Number of Firmware Slots: N/A 00:26:05.383 Firmware Slot 1 Read-Only: N/A 00:26:05.383 Firmware Activation Without Reset: N/A 00:26:05.383 Multiple Update Detection Support: N/A 00:26:05.383 Firmware Update Granularity: No Information Provided 00:26:05.383 Per-Namespace SMART Log: Yes 00:26:05.383 Asymmetric Namespace Access Log Page: Supported 00:26:05.383 ANA Transition Time : 10 sec 00:26:05.383 00:26:05.383 Asymmetric Namespace Access Capabilities 00:26:05.383 ANA Optimized State : Supported 00:26:05.383 ANA Non-Optimized State : Supported 00:26:05.383 ANA Inaccessible State : Supported 00:26:05.383 ANA Persistent Loss State : Supported 00:26:05.383 ANA Change State : Supported 00:26:05.383 ANAGRPID is not changed : No 00:26:05.383 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:05.383 00:26:05.383 ANA Group Identifier Maximum : 128 00:26:05.383 Number of ANA Group Identifiers : 128 00:26:05.383 Max Number of Allowed Namespaces : 1024 00:26:05.383 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:05.383 Command Effects Log Page: Supported 00:26:05.383 Get Log Page Extended Data: Supported 00:26:05.383 Telemetry Log Pages: Not Supported 00:26:05.383 Persistent Event Log Pages: Not Supported 00:26:05.383 Supported Log Pages Log Page: May Support 00:26:05.383 Commands Supported & Effects Log Page: Not Supported 00:26:05.383 Feature Identifiers & Effects Log Page:May Support 00:26:05.383 NVMe-MI Commands & Effects Log Page: May Support 00:26:05.383 Data Area 4 for Telemetry Log: Not Supported 00:26:05.383 Error Log Page Entries Supported: 128 00:26:05.383 Keep Alive: Supported 00:26:05.383 Keep Alive Granularity: 1000 ms 00:26:05.383 00:26:05.383 NVM Command Set Attributes 00:26:05.383 ========================== 00:26:05.383 Submission Queue Entry Size 00:26:05.383 Max: 64 00:26:05.383 Min: 64 00:26:05.383 Completion Queue Entry Size 00:26:05.383 Max: 16 00:26:05.383 Min: 16 00:26:05.383 Number of Namespaces: 1024 00:26:05.383 Compare Command: Not Supported 00:26:05.383 Write Uncorrectable Command: Not Supported 00:26:05.383 Dataset Management Command: Supported 00:26:05.383 Write Zeroes Command: Supported 00:26:05.383 Set Features Save Field: Not Supported 00:26:05.383 Reservations: Not Supported 00:26:05.383 Timestamp: Not Supported 00:26:05.383 Copy: Not Supported 00:26:05.383 Volatile Write Cache: Present 00:26:05.383 Atomic Write Unit (Normal): 1 00:26:05.383 Atomic Write Unit (PFail): 1 00:26:05.383 Atomic Compare & Write Unit: 1 00:26:05.383 Fused Compare & Write: Not Supported 00:26:05.383 Scatter-Gather List 00:26:05.383 SGL Command Set: Supported 00:26:05.383 SGL Keyed: Not Supported 00:26:05.383 SGL Bit Bucket Descriptor: Not Supported 00:26:05.383 SGL Metadata Pointer: Not Supported 00:26:05.383 Oversized SGL: Not Supported 00:26:05.383 SGL Metadata Address: Not Supported 00:26:05.383 SGL Offset: Supported 00:26:05.383 Transport SGL Data Block: Not Supported 00:26:05.383 Replay Protected Memory Block: Not Supported 00:26:05.383 00:26:05.383 Firmware Slot Information 00:26:05.383 ========================= 00:26:05.383 Active slot: 0 00:26:05.383 00:26:05.383 Asymmetric Namespace Access 00:26:05.383 =========================== 00:26:05.383 Change Count : 0 00:26:05.383 Number of ANA Group Descriptors : 1 00:26:05.383 ANA Group Descriptor : 0 00:26:05.383 ANA Group ID : 1 00:26:05.383 Number of NSID Values : 1 00:26:05.383 Change Count : 0 00:26:05.383 ANA State : 1 00:26:05.383 Namespace Identifier : 1 00:26:05.383 00:26:05.383 Commands Supported and Effects 00:26:05.383 ============================== 00:26:05.383 Admin Commands 00:26:05.383 -------------- 00:26:05.383 Get Log Page (02h): Supported 00:26:05.383 Identify (06h): Supported 00:26:05.383 Abort (08h): Supported 00:26:05.383 Set Features (09h): Supported 00:26:05.383 Get Features (0Ah): Supported 00:26:05.383 Asynchronous Event Request (0Ch): Supported 00:26:05.383 Keep Alive (18h): Supported 00:26:05.383 I/O Commands 00:26:05.383 ------------ 00:26:05.383 Flush (00h): Supported 00:26:05.383 Write (01h): Supported LBA-Change 00:26:05.383 Read (02h): Supported 00:26:05.383 Write Zeroes (08h): Supported LBA-Change 00:26:05.383 Dataset Management (09h): Supported 00:26:05.383 00:26:05.383 Error Log 00:26:05.383 ========= 00:26:05.383 Entry: 0 00:26:05.383 Error Count: 0x3 00:26:05.383 Submission Queue Id: 0x0 00:26:05.383 Command Id: 0x5 00:26:05.383 Phase Bit: 0 00:26:05.383 Status Code: 0x2 00:26:05.383 Status Code Type: 0x0 00:26:05.383 Do Not Retry: 1 00:26:05.383 
Error Location: 0x28 00:26:05.383 LBA: 0x0 00:26:05.383 Namespace: 0x0 00:26:05.383 Vendor Log Page: 0x0 00:26:05.383 ----------- 00:26:05.383 Entry: 1 00:26:05.383 Error Count: 0x2 00:26:05.383 Submission Queue Id: 0x0 00:26:05.383 Command Id: 0x5 00:26:05.383 Phase Bit: 0 00:26:05.383 Status Code: 0x2 00:26:05.383 Status Code Type: 0x0 00:26:05.383 Do Not Retry: 1 00:26:05.383 Error Location: 0x28 00:26:05.383 LBA: 0x0 00:26:05.383 Namespace: 0x0 00:26:05.383 Vendor Log Page: 0x0 00:26:05.383 ----------- 00:26:05.383 Entry: 2 00:26:05.383 Error Count: 0x1 00:26:05.383 Submission Queue Id: 0x0 00:26:05.383 Command Id: 0x4 00:26:05.383 Phase Bit: 0 00:26:05.383 Status Code: 0x2 00:26:05.383 Status Code Type: 0x0 00:26:05.383 Do Not Retry: 1 00:26:05.383 Error Location: 0x28 00:26:05.383 LBA: 0x0 00:26:05.384 Namespace: 0x0 00:26:05.384 Vendor Log Page: 0x0 00:26:05.384 00:26:05.384 Number of Queues 00:26:05.384 ================ 00:26:05.384 Number of I/O Submission Queues: 128 00:26:05.384 Number of I/O Completion Queues: 128 00:26:05.384 00:26:05.384 ZNS Specific Controller Data 00:26:05.384 ============================ 00:26:05.384 Zone Append Size Limit: 0 00:26:05.384 00:26:05.384 00:26:05.384 Active Namespaces 00:26:05.384 ================= 00:26:05.384 get_feature(0x05) failed 00:26:05.384 Namespace ID:1 00:26:05.384 Command Set Identifier: NVM (00h) 00:26:05.384 Deallocate: Supported 00:26:05.384 Deallocated/Unwritten Error: Not Supported 00:26:05.384 Deallocated Read Value: Unknown 00:26:05.384 Deallocate in Write Zeroes: Not Supported 00:26:05.384 Deallocated Guard Field: 0xFFFF 00:26:05.384 Flush: Supported 00:26:05.384 Reservation: Not Supported 00:26:05.384 Namespace Sharing Capabilities: Multiple Controllers 00:26:05.384 Size (in LBAs): 1953525168 (931GiB) 00:26:05.384 Capacity (in LBAs): 1953525168 (931GiB) 00:26:05.384 Utilization (in LBAs): 1953525168 (931GiB) 00:26:05.384 UUID: b79feb5d-656b-4c3c-ba53-56595ee35a12 00:26:05.384 Thin Provisioning: Not Supported 00:26:05.384 Per-NS Atomic Units: Yes 00:26:05.384 Atomic Boundary Size (Normal): 0 00:26:05.384 Atomic Boundary Size (PFail): 0 00:26:05.384 Atomic Boundary Offset: 0 00:26:05.384 NGUID/EUI64 Never Reused: No 00:26:05.384 ANA group ID: 1 00:26:05.384 Namespace Write Protected: No 00:26:05.384 Number of LBA Formats: 1 00:26:05.384 Current LBA Format: LBA Format #00 00:26:05.384 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:05.384 00:26:05.384 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:05.384 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:05.384 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:26:05.384 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:05.384 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:26:05.384 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:05.384 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:05.384 rmmod nvme_tcp 00:26:05.384 rmmod nvme_fabrics 00:26:05.643 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:05.643 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:26:05.643 10:53:12 
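Both identify passes in this test were driven by SPDK's spdk_nvme_identify with a transport ID string (trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420, as shown above). The same pages can also be pulled with stock nvme-cli; a minimal sketch, assuming nvme-cli is installed and that the controller enumerates as /dev/nvme0:

nvme discover -t tcp -a 10.0.0.1 -s 4420    # discovery log page, as printed above
nvme connect -t tcp -a 10.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn
nvme id-ctrl /dev/nvme0                     # assumed device name; depends on enumeration order
nvme disconnect -n nqn.2016-06.io.spdk:testnqn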
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:26:05.643 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:26:05.643 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:05.643 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:05.643 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:05.643 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:26:05.643 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:26:05.643 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:05.643 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:26:05.643 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:05.643 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:05.643 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:05.643 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:05.643 10:53:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:07.630 10:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:07.630 10:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:07.630 10:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:07.630 10:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:26:07.630 10:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:07.630 10:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:07.630 10:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:07.630 10:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:07.630 10:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:07.630 10:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:07.630 10:53:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:10.977 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:10.977 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:10.977 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:10.977 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:10.977 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:10.977 0000:00:04.2 
(8086 2021): ioatdma -> vfio-pci 00:26:10.977 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:10.977 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:10.977 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:10.977 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:10.977 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:10.977 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:10.977 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:10.977 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:10.977 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:10.977 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:11.237 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:11.497 00:26:11.497 real 0m16.713s 00:26:11.497 user 0m4.344s 00:26:11.497 sys 0m8.748s 00:26:11.497 10:53:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:11.497 10:53:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:11.497 ************************************ 00:26:11.497 END TEST nvmf_identify_kernel_target 00:26:11.497 ************************************ 00:26:11.497 10:53:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:11.497 10:53:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:11.497 10:53:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:11.497 10:53:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.497 ************************************ 00:26:11.497 START TEST nvmf_auth_host 00:26:11.497 ************************************ 00:26:11.497 10:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:11.757 * Looking for test storage... 
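Before the timing summary above, clean_kernel_target tore down the kernel nvmet target through configfs. Condensed into a standalone sketch (paths taken from the trace; xtrace hides redirections, so the enable attribute targeted by the bare `echo 0` is an assumption):

subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
echo 0 > "$subsys/namespaces/1/enable"   # assumed target of the 'echo 0' above
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
rmdir "$subsys/namespaces/1"             # namespace first, then the port
rmdir /sys/kernel/config/nvmet/ports/1
rmdir "$subsys"                          # finally the subsystem itself
modprobe -r nvmet_tcp nvmet              # unload once no holders remain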
00:26:11.757 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:11.757 10:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:11.757 10:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:26:11.757 10:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:11.757 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:11.757 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:11.757 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:11.757 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:11.757 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:11.757 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:11.757 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:11.757 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:11.757 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:11.757 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:11.757 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:11.757 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:11.757 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:11.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.758 --rc genhtml_branch_coverage=1 00:26:11.758 --rc genhtml_function_coverage=1 00:26:11.758 --rc genhtml_legend=1 00:26:11.758 --rc geninfo_all_blocks=1 00:26:11.758 --rc geninfo_unexecuted_blocks=1 00:26:11.758 00:26:11.758 ' 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:11.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.758 --rc genhtml_branch_coverage=1 00:26:11.758 --rc genhtml_function_coverage=1 00:26:11.758 --rc genhtml_legend=1 00:26:11.758 --rc geninfo_all_blocks=1 00:26:11.758 --rc geninfo_unexecuted_blocks=1 00:26:11.758 00:26:11.758 ' 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:11.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.758 --rc genhtml_branch_coverage=1 00:26:11.758 --rc genhtml_function_coverage=1 00:26:11.758 --rc genhtml_legend=1 00:26:11.758 --rc geninfo_all_blocks=1 00:26:11.758 --rc geninfo_unexecuted_blocks=1 00:26:11.758 00:26:11.758 ' 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:11.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.758 --rc genhtml_branch_coverage=1 00:26:11.758 --rc genhtml_function_coverage=1 00:26:11.758 --rc genhtml_legend=1 00:26:11.758 --rc geninfo_all_blocks=1 00:26:11.758 --rc geninfo_unexecuted_blocks=1 00:26:11.758 00:26:11.758 ' 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:11.758 10:53:19 
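The lt/cmp_versions trace above is deciding whether the installed lcov predates 2.x and therefore still wants the legacy --rc lcov_branch_coverage/lcov_function_coverage option spellings. A self-contained sketch of that element-wise comparison; lt_version is an illustrative name, not the helper's real one:

lt_version() {
    # True (return 0) when $1 sorts before $2, comparing fields numerically.
    local -a v1 v2
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}
lt_version "$(lcov --version | awk '{print $NF}')" 2 && echo "use legacy lcov --rc options"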
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:11.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:26:11.758 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:11.759 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:11.759 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:11.759 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:11.759 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:11.759 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:11.759 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:11.759 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:11.759 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:11.759 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:11.759 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:11.759 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:26:11.759 10:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:26:18.333 10:53:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:18.333 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:18.333 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.333 
10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:18.333 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:18.334 Found net devices under 0000:86:00.0: cvl_0_0 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:18.334 Found net devices under 0000:86:00.1: cvl_0_1 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:18.334 10:53:24 
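gather_supported_nvmf_pci_devs, traced above, walks the PCI bus for NICs the test knows how to drive and records the net devices they expose. A minimal standalone sketch of the same scan for the E810 ports this node matched (vendor 0x8086 and device 0x159b are taken from the trace, as is the $pci/net path):

for pci in /sys/bus/pci/devices/*; do
    [[ $(< "$pci/vendor") == 0x8086 && $(< "$pci/device") == 0x159b ]] || continue
    # Each matching port exposes its netdev name under $pci/net/
    echo "Found ${pci##*/}: $(ls "$pci/net" 2> /dev/null)"
done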
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:18.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:18.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.475 ms 00:26:18.334 00:26:18.334 --- 10.0.0.2 ping statistics --- 00:26:18.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:18.334 rtt min/avg/max/mdev = 0.475/0.475/0.475/0.000 ms 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:18.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:18.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:26:18.334 00:26:18.334 --- 10.0.0.1 ping statistics --- 00:26:18.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:18.334 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:18.334 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:18.334 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:18.334 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:18.334 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:18.334 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.334 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=1822010 00:26:18.334 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 1822010 00:26:18.334 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:18.334 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1822010 ']' 00:26:18.334 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:18.334 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:18.334 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
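nvmf_tcp_init, traced above, isolates the target side of the link in a network namespace so the initiator (10.0.0.1) and target (10.0.0.2) talk over the physical cable rather than loopback. The bring-up, condensed (the interface names cvl_0_0/cvl_0_1 and the namespace name come straight from the trace):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # reachability check, as in the trace

This is also why the nvmf_tgt launched just above runs under `ip netns exec cvl_0_0_ns_spdk`: the target application must live in the namespace that owns cvl_0_0.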
00:26:18.334 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:18.334 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.334 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:18.334 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:26:18.334 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:18.334 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:18.334 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.334 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:18.334 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:18.334 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:18.334 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:18.334 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:18.334 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:18.334 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:18.334 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:18.334 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:18.334 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=fda0174f235e1c002ee36a309dd884e6 00:26:18.334 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:18.334 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.c5q 00:26:18.334 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key fda0174f235e1c002ee36a309dd884e6 0 00:26:18.334 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 fda0174f235e1c002ee36a309dd884e6 0 00:26:18.334 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:18.334 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:18.334 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=fda0174f235e1c002ee36a309dd884e6 00:26:18.334 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:18.334 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:18.334 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.c5q 00:26:18.334 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.c5q 00:26:18.334 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.c5q 00:26:18.334 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:18.334 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:18.335 10:53:25 
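The gen_dhchap_key calls in this stretch of the trace draw random bytes with xxd (the length argument counts hex digits, so `null 32` is a 16-byte secret and `sha512 64` a 32-byte one) and have the inline Python wrap them in the DHHC-1 secret representation: DHHC-1:<hmac-id>:<base64 of the key bytes with, per the NVMe-oF DH-HMAC-CHAP spec, a CRC-32 appended>:, where hmac ids 0-3 map to null/sha256/sha384/sha512 exactly as in the digests table above. Assuming a reasonably recent nvme-cli, comparable keys come from its gen-dhchap-key subcommand; the flag spellings below are nvme-cli's own and are not used by this script:

nvme gen-dhchap-key --key-length=16 --hmac=0   # like gen_dhchap_key null 32 above
nvme gen-dhchap-key --key-length=32 --hmac=3   # like gen_dhchap_key sha512 64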
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=68e585214d92649615d09a1b8857296485bacdf4c8cef090fbc5029b94852481 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.vTI 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 68e585214d92649615d09a1b8857296485bacdf4c8cef090fbc5029b94852481 3 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 68e585214d92649615d09a1b8857296485bacdf4c8cef090fbc5029b94852481 3 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=68e585214d92649615d09a1b8857296485bacdf4c8cef090fbc5029b94852481 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.vTI 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.vTI 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.vTI 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=872afabca8311f50ec905f861d0991c33189f9127e3a7fe6 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.2mh 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 872afabca8311f50ec905f861d0991c33189f9127e3a7fe6 0 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 872afabca8311f50ec905f861d0991c33189f9127e3a7fe6 0 
00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=872afabca8311f50ec905f861d0991c33189f9127e3a7fe6 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.2mh 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.2mh 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.2mh 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=706404a8ae1075430edbf1cc7644c47921623fb571471efd 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.QAp 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 706404a8ae1075430edbf1cc7644c47921623fb571471efd 2 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 706404a8ae1075430edbf1cc7644c47921623fb571471efd 2 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=706404a8ae1075430edbf1cc7644c47921623fb571471efd 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.QAp 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.QAp 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.QAp 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:18.335 10:53:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2ca146e76bb5b32729b8d8eb065a77ad 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.HiQ 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2ca146e76bb5b32729b8d8eb065a77ad 1 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2ca146e76bb5b32729b8d8eb065a77ad 1 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2ca146e76bb5b32729b8d8eb065a77ad 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.HiQ 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.HiQ 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.HiQ 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1befd60bb0e4681d3c8d613bd52e0f6e 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Hts 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1befd60bb0e4681d3c8d613bd52e0f6e 1 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1befd60bb0e4681d3c8d613bd52e0f6e 1 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=1befd60bb0e4681d3c8d613bd52e0f6e 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Hts 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Hts 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Hts 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=81fc5ad6ae9a7515dfaa627ca55c59a37388a57b29a30d32 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.uwV 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 81fc5ad6ae9a7515dfaa627ca55c59a37388a57b29a30d32 2 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 81fc5ad6ae9a7515dfaa627ca55c59a37388a57b29a30d32 2 00:26:18.335 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:18.336 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:18.336 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=81fc5ad6ae9a7515dfaa627ca55c59a37388a57b29a30d32 00:26:18.336 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:26:18.336 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:18.336 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.uwV 00:26:18.336 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.uwV 00:26:18.336 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.uwV 00:26:18.336 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:18.336 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:18.336 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:18.336 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:18.336 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:18.336 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:18.336 10:53:25 
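
[editor's note] Each gen_dhchap_key cycle above follows the same five steps: draw len/2 random bytes as a hex string with xxd, pick a temp file with mktemp, wrap the secret in the DHHC-1 interchange format via an inline python snippet, chmod 0600, and echo the path back. The python body itself is not shown in the log; the sketch below is our reading of what it computes -- base64 of the ASCII hex secret plus a little-endian CRC-32, behind a two-digit digest id matching the digests map (null=0, sha256=1, sha384=2, sha512=3). Decoding one of the DHHC-1 strings later in the log (e.g. DHHC-1:00:ODcyYWZh...) back to its hex secret is an easy sanity check. Names here are illustrative, not copied from nvmf/common.sh:

gen_key() {                                  # illustrative name, not from the tree
    local digest_id=$1 len=$2 hex file
    hex=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # "len" hex characters of entropy
    file=$(mktemp -t spdk.key.XXX)
    python3 -c 'import base64, binascii, struct, sys
secret = sys.argv[1].encode()                # the hex string itself is the secret
crc = struct.pack("<I", binascii.crc32(secret))
print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(secret + crc).decode()))' \
        "$hex" "$digest_id" > "$file"
    chmod 0600 "$file"                       # keys are secrets; the log does the same
    echo "$file"
}
# gen_key 2 48 would mirror "gen_dhchap_key sha384 48" above
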
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:18.336 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=99b9a12f85fa53536748f3574aa079c0 00:26:18.336 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:18.336 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.fCK 00:26:18.336 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 99b9a12f85fa53536748f3574aa079c0 0 00:26:18.336 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 99b9a12f85fa53536748f3574aa079c0 0 00:26:18.336 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:18.336 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:18.336 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=99b9a12f85fa53536748f3574aa079c0 00:26:18.336 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:18.336 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:18.595 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.fCK 00:26:18.595 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.fCK 00:26:18.595 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.fCK 00:26:18.595 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:18.595 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:18.595 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:18.595 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:18.595 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:26:18.595 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:26:18.595 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:18.595 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1c0865178e4b03c9344629f5edf1c9681ed66a57bd38f492cea2065935e5f90b 00:26:18.595 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:26:18.595 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.k6N 00:26:18.595 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1c0865178e4b03c9344629f5edf1c9681ed66a57bd38f492cea2065935e5f90b 3 00:26:18.595 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1c0865178e4b03c9344629f5edf1c9681ed66a57bd38f492cea2065935e5f90b 3 00:26:18.595 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:18.595 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:18.595 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1c0865178e4b03c9344629f5edf1c9681ed66a57bd38f492cea2065935e5f90b 00:26:18.595 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:26:18.595 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:26:18.595 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.k6N 00:26:18.595 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.k6N 00:26:18.595 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.k6N 00:26:18.595 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:18.595 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1822010 00:26:18.595 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1822010 ']' 00:26:18.595 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:18.595 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:18.595 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:18.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:18.595 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:18.595 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.855 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:18.855 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:26:18.855 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:18.855 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.c5q 00:26:18.855 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.855 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.855 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.855 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.vTI ]] 00:26:18.855 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vTI 00:26:18.855 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.855 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.855 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.855 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:18.855 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.2mh 00:26:18.855 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.855 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.855 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.855 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.QAp ]] 00:26:18.855 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.QAp 00:26:18.855 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.855 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.855 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.855 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:18.855 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.HiQ 00:26:18.855 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.856 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.856 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.856 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Hts ]] 00:26:18.856 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Hts 00:26:18.856 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.856 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.856 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.856 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:18.856 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.uwV 00:26:18.856 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.856 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.856 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.856 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.fCK ]] 00:26:18.856 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.fCK 00:26:18.856 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.856 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.856 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.856 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:18.856 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.k6N 00:26:18.856 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.856 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.856 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.856 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:18.856 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:26:18.856 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:18.856 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:18.856 10:53:26 
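
[editor's note] With the five key/ckey pairs generated and the target RPC listener up (waitforlisten 1822010), the host/auth.sh@80-82 loop above registers every file with SPDK's file-based keyring so later attach calls can refer to the keys by name. Condensed, assuming scripts/rpc.py from the SPDK tree and the key paths generated above:

for i in "${!keys[@]}"; do
    ./scripts/rpc.py keyring_file_add_key "key$i" "${keys[$i]}"
    # ctrlr (bidirectional) keys are optional -- ckeys[4] is empty above, so it is skipped
    [[ -n ${ckeys[$i]} ]] && ./scripts/rpc.py keyring_file_add_key "ckey$i" "${ckeys[$i]}"
done
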
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:18.856 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:18.856 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.856 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.856 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:18.856 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.856 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:18.856 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:18.856 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:18.856 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:18.856 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:18.856 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:26:18.856 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:18.856 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:18.856 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:18.856 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:26:18.856 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:18.856 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:26:18.856 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:18.856 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:21.390 Waiting for block devices as requested 00:26:21.650 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:26:21.650 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:21.650 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:21.916 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:21.916 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:21.916 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:21.916 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:22.174 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:22.174 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:22.174 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:22.174 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:22.433 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:22.433 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:22.433 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:22.691 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:22.691 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:22.691 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:23.259 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:23.259 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:23.259 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:26:23.259 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:26:23.259 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:23.259 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:23.259 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:26:23.259 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:23.259 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:23.259 No valid GPT data, bailing 00:26:23.259 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:23.259 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:26:23.259 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:26:23.259 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:26:23.259 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:26:23.259 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:23.260 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:23.260 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:23.260 10:53:30 
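
[editor's note] The three mkdir calls above create the configfs skeleton for the kernel target; the echo and ln -s entries that follow fill it in. Because xtrace does not print redirections, the destination attributes are invisible in the log, so the sketch below is our reconstruction using the standard nvmet attribute names, with the values taken from the trace:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
echo SPDK-nqn.2024-02.io.spdk:cnode0 > $subsys/attr_serial   # identity string (attr_serial in our reading)
echo 1 > $subsys/attr_allow_any_host                         # flipped back off later by auth.sh
echo /dev/nvme0n1 > $subsys/namespaces/1/device_path         # the block device probed above
echo 1 > $subsys/namespaces/1/enable
echo 10.0.0.1 > $nvmet/ports/1/addr_traddr
echo tcp      > $nvmet/ports/1/addr_trtype
echo 4420     > $nvmet/ports/1/addr_trsvcid
echo ipv4     > $nvmet/ports/1/addr_adrfam
ln -s $subsys $nvmet/ports/1/subsystems/                     # expose the subsystem on the port
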
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:23.260 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:26:23.260 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:26:23.260 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:26:23.260 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:26:23.260 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:26:23.260 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:26:23.260 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:26:23.260 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:23.520 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:26:23.520 00:26:23.520 Discovery Log Number of Records 2, Generation counter 2 00:26:23.520 =====Discovery Log Entry 0====== 00:26:23.520 trtype: tcp 00:26:23.520 adrfam: ipv4 00:26:23.520 subtype: current discovery subsystem 00:26:23.520 treq: not specified, sq flow control disable supported 00:26:23.520 portid: 1 00:26:23.520 trsvcid: 4420 00:26:23.520 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:23.520 traddr: 10.0.0.1 00:26:23.520 eflags: none 00:26:23.520 sectype: none 00:26:23.520 =====Discovery Log Entry 1====== 00:26:23.520 trtype: tcp 00:26:23.520 adrfam: ipv4 00:26:23.520 subtype: nvme subsystem 00:26:23.520 treq: not specified, sq flow control disable supported 00:26:23.520 portid: 1 00:26:23.520 trsvcid: 4420 00:26:23.520 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:23.520 traddr: 10.0.0.1 00:26:23.520 eflags: none 00:26:23.520 sectype: none 00:26:23.520 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:23.520 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:26:23.520 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:23.520 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:23.520 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.520 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:23.520 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:23.520 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:23.520 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODcyYWZhYmNhODMxMWY1MGVjOTA1Zjg2MWQwOTkxYzMzMTg5ZjkxMjdlM2E3ZmU20qFfug==: 00:26:23.520 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: 00:26:23.520 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:23.520 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host 
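
[editor's note] The mkdir under hosts/, the echo 0 (presumably turning attr_allow_any_host back off) and the allowed_hosts symlink at host/auth.sh@36-38 pin the subsystem to host0; nvmet_auth_set_key (host/auth.sh@42-51) then pushes the DH-HMAC-CHAP parameters into that host entry. As before, xtrace hides the redirection targets, so the paths below assume the standard nvmet host attributes; the key strings are elided here but appear in full in the trace:

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > $host/dhchap_hash                # digest for this pass
echo ffdhe2048      > $host/dhchap_dhgroup             # DH group for this pass
echo 'DHHC-1:00:ODcyYWZh...:' > $host/dhchap_key       # host key (elided)
echo 'DHHC-1:02:NzA2NDA0...:' > $host/dhchap_ctrl_key  # ctrlr key, only when ckey is set
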
-- host/auth.sh@49 -- # echo ffdhe2048 00:26:23.520 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODcyYWZhYmNhODMxMWY1MGVjOTA1Zjg2MWQwOTkxYzMzMTg5ZjkxMjdlM2E3ZmU20qFfug==: 00:26:23.520 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: ]] 00:26:23.520 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: 00:26:23.520 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:23.520 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:26:23.520 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:23.520 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:23.520 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:23.520 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.520 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:26:23.520 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:23.520 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:23.520 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.520 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:23.520 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.520 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.520 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.520 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.520 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:23.520 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:23.520 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:23.520 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.520 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.520 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:23.520 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.520 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:23.520 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:23.520 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:23.520 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:23.521 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.521 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.781 nvme0n1 00:26:23.781 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.781 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.781 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.781 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.781 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.781 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmRhMDE3NGYyMzVlMWMwMDJlZTM2YTMwOWRkODg0ZTZf02Qq: 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjhlNTg1MjE0ZDkyNjQ5NjE1ZDA5YTFiODg1NzI5NjQ4NWJhY2RmNGM4Y2VmMDkwZmJjNTAyOWI5NDg1MjQ4MQgQZSk=: 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmRhMDE3NGYyMzVlMWMwMDJlZTM2YTMwOWRkODg0ZTZf02Qq: 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjhlNTg1MjE0ZDkyNjQ5NjE1ZDA5YTFiODg1NzI5NjQ4NWJhY2RmNGM4Y2VmMDkwZmJjNTAyOWI5NDg1MjQ4MQgQZSk=: ]] 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjhlNTg1MjE0ZDkyNjQ5NjE1ZDA5YTFiODg1NzI5NjQ4NWJhY2RmNGM4Y2VmMDkwZmJjNTAyOWI5NDg1MjQ4MQgQZSk=: 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
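
[editor's note] Each connect_authenticate pass (host/auth.sh@55-65) is the initiator-side mirror of the target setup: restrict the allowed digests and DH groups, attach with the keyring names registered earlier, confirm the controller came up, and tear it down. rpc_cmd resolves to scripts/rpc.py in this harness, so the equivalent by hand is roughly:

./scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
    -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0"
./scripts/rpc.py bdev_nvme_detach_controller nvme0
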
00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.781 nvme0n1 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.781 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.041 10:53:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODcyYWZhYmNhODMxMWY1MGVjOTA1Zjg2MWQwOTkxYzMzMTg5ZjkxMjdlM2E3ZmU20qFfug==: 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODcyYWZhYmNhODMxMWY1MGVjOTA1Zjg2MWQwOTkxYzMzMTg5ZjkxMjdlM2E3ZmU20qFfug==: 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: ]] 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.041 nvme0n1 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmNhMTQ2ZTc2YmI1YjMyNzI5YjhkOGViMDY1YTc3YWSfsBYA: 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:24.041 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:24.042 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:MmNhMTQ2ZTc2YmI1YjMyNzI5YjhkOGViMDY1YTc3YWSfsBYA: 00:26:24.042 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: ]] 00:26:24.042 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: 00:26:24.042 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:24.042 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.042 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:24.042 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:24.042 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:24.042 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.042 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:24.042 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.042 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.301 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.301 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.301 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:24.301 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:24.301 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:24.301 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.301 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.301 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:24.301 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.301 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:24.301 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:24.301 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:24.301 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:24.301 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.301 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.301 nvme0n1 00:26:24.301 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.301 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.301 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.301 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:24.301 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.301 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.301 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.301 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.301 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.301 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.301 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.301 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.301 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:24.301 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.301 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:24.301 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:24.301 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:24.301 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODFmYzVhZDZhZTlhNzUxNWRmYWE2MjdjYTU1YzU5YTM3Mzg4YTU3YjI5YTMwZDMy/5PQLQ==: 00:26:24.301 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTliOWExMmY4NWZhNTM1MzY3NDhmMzU3NGFhMDc5YzBzPRFT: 00:26:24.301 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:24.301 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:24.302 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODFmYzVhZDZhZTlhNzUxNWRmYWE2MjdjYTU1YzU5YTM3Mzg4YTU3YjI5YTMwZDMy/5PQLQ==: 00:26:24.302 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTliOWExMmY4NWZhNTM1MzY3NDhmMzU3NGFhMDc5YzBzPRFT: ]] 00:26:24.302 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTliOWExMmY4NWZhNTM1MzY3NDhmMzU3NGFhMDc5YzBzPRFT: 00:26:24.302 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:24.302 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.302 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:24.302 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:24.302 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:24.302 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.302 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:24.302 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.302 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.302 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.302 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:26:24.302 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:24.302 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:24.302 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:24.302 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.302 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.302 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:24.302 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.302 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:24.302 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:24.302 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:24.302 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:24.302 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.302 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.561 nvme0n1 00:26:24.561 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.561 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.561 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.561 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.561 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.561 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.561 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.561 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.561 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.561 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.561 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.561 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.562 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:24.562 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.562 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:24.562 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:24.562 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:24.562 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MWMwODY1MTc4ZTRiMDNjOTM0NDYyOWY1ZWRmMWM5NjgxZWQ2NmE1N2JkMzhmNDkyY2VhMjA2NTkzNWU1ZjkwYrriY8A=: 00:26:24.562 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:24.562 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:24.562 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:24.562 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWMwODY1MTc4ZTRiMDNjOTM0NDYyOWY1ZWRmMWM5NjgxZWQ2NmE1N2JkMzhmNDkyY2VhMjA2NTkzNWU1ZjkwYrriY8A=: 00:26:24.562 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:24.562 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:24.562 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.562 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:24.562 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:24.562 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:24.562 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.562 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:24.562 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.562 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.562 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.562 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.562 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:24.562 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:24.562 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:24.562 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.562 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.562 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:24.562 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.562 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:24.562 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:24.562 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:24.562 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:24.562 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.562 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.821 nvme0n1 00:26:24.821 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.821 10:53:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.821 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.821 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.821 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.821 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.821 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.821 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.821 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.822 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.822 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.822 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:24.822 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.822 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:24.822 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.822 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:24.822 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:24.822 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:24.822 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmRhMDE3NGYyMzVlMWMwMDJlZTM2YTMwOWRkODg0ZTZf02Qq: 00:26:24.822 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjhlNTg1MjE0ZDkyNjQ5NjE1ZDA5YTFiODg1NzI5NjQ4NWJhY2RmNGM4Y2VmMDkwZmJjNTAyOWI5NDg1MjQ4MQgQZSk=: 00:26:24.822 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:24.822 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:24.822 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmRhMDE3NGYyMzVlMWMwMDJlZTM2YTMwOWRkODg0ZTZf02Qq: 00:26:24.822 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjhlNTg1MjE0ZDkyNjQ5NjE1ZDA5YTFiODg1NzI5NjQ4NWJhY2RmNGM4Y2VmMDkwZmJjNTAyOWI5NDg1MjQ4MQgQZSk=: ]] 00:26:24.822 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjhlNTg1MjE0ZDkyNjQ5NjE1ZDA5YTFiODg1NzI5NjQ4NWJhY2RmNGM4Y2VmMDkwZmJjNTAyOWI5NDg1MjQ4MQgQZSk=: 00:26:24.822 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:24.822 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.822 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:24.822 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:24.822 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:24.822 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.822 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:24.822 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.822 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.822 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.822 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.822 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:24.822 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:24.822 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:24.822 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.822 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.822 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:24.822 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.822 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:24.822 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:24.822 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:24.822 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:24.822 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.822 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.081 nvme0n1 00:26:25.081 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.081 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.081 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.081 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.081 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.082 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.082 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.082 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.082 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.082 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.082 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.082 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.082 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:25.082 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:26:25.082 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:25.082 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:25.082 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:25.082 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODcyYWZhYmNhODMxMWY1MGVjOTA1Zjg2MWQwOTkxYzMzMTg5ZjkxMjdlM2E3ZmU20qFfug==: 00:26:25.082 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: 00:26:25.082 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:25.082 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:25.082 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODcyYWZhYmNhODMxMWY1MGVjOTA1Zjg2MWQwOTkxYzMzMTg5ZjkxMjdlM2E3ZmU20qFfug==: 00:26:25.082 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: ]] 00:26:25.082 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: 00:26:25.082 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:25.082 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.082 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:25.082 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:25.082 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:25.082 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.082 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:25.082 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.082 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.082 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.082 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.082 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:25.082 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:25.082 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:25.082 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.082 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.082 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:25.082 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.082 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:25.082 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:25.082 
10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:25.082 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:25.082 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.082 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.341 nvme0n1 00:26:25.341 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.341 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.341 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.342 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.342 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.342 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.342 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.342 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.342 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.342 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.342 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.342 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.342 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:25.342 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.342 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:25.342 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:25.342 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:25.342 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmNhMTQ2ZTc2YmI1YjMyNzI5YjhkOGViMDY1YTc3YWSfsBYA: 00:26:25.342 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: 00:26:25.342 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:25.342 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:25.342 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmNhMTQ2ZTc2YmI1YjMyNzI5YjhkOGViMDY1YTc3YWSfsBYA: 00:26:25.342 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: ]] 00:26:25.342 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: 00:26:25.342 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:25.342 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.342 10:53:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:25.342 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:25.342 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:25.342 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.342 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:25.342 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.342 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.342 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.342 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.342 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:25.342 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:25.342 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:25.342 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.342 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.342 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:25.342 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.342 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:25.342 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:25.342 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:25.342 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:25.342 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.342 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.601 nvme0n1 00:26:25.601 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.601 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.601 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.601 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.601 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.601 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.601 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.601 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.601 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.601 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:25.601 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.601 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.601 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:25.601 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.601 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:25.601 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:25.601 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:25.601 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODFmYzVhZDZhZTlhNzUxNWRmYWE2MjdjYTU1YzU5YTM3Mzg4YTU3YjI5YTMwZDMy/5PQLQ==: 00:26:25.601 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTliOWExMmY4NWZhNTM1MzY3NDhmMzU3NGFhMDc5YzBzPRFT: 00:26:25.601 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:25.601 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:25.601 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODFmYzVhZDZhZTlhNzUxNWRmYWE2MjdjYTU1YzU5YTM3Mzg4YTU3YjI5YTMwZDMy/5PQLQ==: 00:26:25.601 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTliOWExMmY4NWZhNTM1MzY3NDhmMzU3NGFhMDc5YzBzPRFT: ]] 00:26:25.601 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTliOWExMmY4NWZhNTM1MzY3NDhmMzU3NGFhMDc5YzBzPRFT: 00:26:25.601 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:25.601 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.601 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:25.601 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:25.601 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:25.601 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.601 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:25.602 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.602 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.602 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.602 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.602 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:25.602 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:25.602 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:25.602 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.602 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.602 10:53:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:25.602 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.602 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:25.602 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:25.602 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:25.602 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:25.602 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.602 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.861 nvme0n1 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWMwODY1MTc4ZTRiMDNjOTM0NDYyOWY1ZWRmMWM5NjgxZWQ2NmE1N2JkMzhmNDkyY2VhMjA2NTkzNWU1ZjkwYrriY8A=: 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWMwODY1MTc4ZTRiMDNjOTM0NDYyOWY1ZWRmMWM5NjgxZWQ2NmE1N2JkMzhmNDkyY2VhMjA2NTkzNWU1ZjkwYrriY8A=: 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:25.861 10:53:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.861 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.120 nvme0n1 00:26:26.120 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.120 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.120 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.120 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.120 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.120 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.120 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.120 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:26.120 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.120 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.120 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.120 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:26.120 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.120 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:26.121 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.121 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:26.121 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:26.121 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:26.121 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmRhMDE3NGYyMzVlMWMwMDJlZTM2YTMwOWRkODg0ZTZf02Qq: 00:26:26.121 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjhlNTg1MjE0ZDkyNjQ5NjE1ZDA5YTFiODg1NzI5NjQ4NWJhY2RmNGM4Y2VmMDkwZmJjNTAyOWI5NDg1MjQ4MQgQZSk=: 00:26:26.121 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:26.121 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:26.121 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmRhMDE3NGYyMzVlMWMwMDJlZTM2YTMwOWRkODg0ZTZf02Qq: 00:26:26.121 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjhlNTg1MjE0ZDkyNjQ5NjE1ZDA5YTFiODg1NzI5NjQ4NWJhY2RmNGM4Y2VmMDkwZmJjNTAyOWI5NDg1MjQ4MQgQZSk=: ]] 00:26:26.121 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjhlNTg1MjE0ZDkyNjQ5NjE1ZDA5YTFiODg1NzI5NjQ4NWJhY2RmNGM4Y2VmMDkwZmJjNTAyOWI5NDg1MjQ4MQgQZSk=: 00:26:26.121 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:26.121 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.121 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:26.121 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:26.121 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:26.121 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.121 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:26.121 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.121 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.121 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.121 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.121 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:26.121 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:26:26.121 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:26.121 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.121 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.121 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:26.121 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.121 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:26.121 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:26.121 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:26.121 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:26.121 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.121 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.380 nvme0n1 00:26:26.380 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.380 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.380 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.380 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.380 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.380 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.380 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.380 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.380 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.380 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.380 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.380 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.380 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:26.380 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.380 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:26.380 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:26.380 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:26.380 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODcyYWZhYmNhODMxMWY1MGVjOTA1Zjg2MWQwOTkxYzMzMTg5ZjkxMjdlM2E3ZmU20qFfug==: 00:26:26.380 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: 00:26:26.380 10:53:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:26.380 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:26.380 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODcyYWZhYmNhODMxMWY1MGVjOTA1Zjg2MWQwOTkxYzMzMTg5ZjkxMjdlM2E3ZmU20qFfug==: 00:26:26.380 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: ]] 00:26:26.380 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: 00:26:26.380 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:26.380 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.380 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:26.380 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:26.380 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:26.380 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.380 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:26.380 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.380 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.380 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.380 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.380 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:26.380 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:26.380 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:26.380 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.380 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.380 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:26.380 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.380 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:26.380 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:26.380 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:26.380 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:26.381 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.381 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.640 nvme0n1 00:26:26.640 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:26:26.640 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.640 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.640 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.640 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.640 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.640 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.640 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.640 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.640 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.640 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.640 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.640 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:26.640 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.640 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:26.640 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:26.640 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:26.640 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmNhMTQ2ZTc2YmI1YjMyNzI5YjhkOGViMDY1YTc3YWSfsBYA: 00:26:26.640 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: 00:26:26.640 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:26.640 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:26.640 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmNhMTQ2ZTc2YmI1YjMyNzI5YjhkOGViMDY1YTc3YWSfsBYA: 00:26:26.640 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: ]] 00:26:26.640 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: 00:26:26.640 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:26.640 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.640 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:26.640 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:26.640 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:26.640 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.640 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:26.640 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:26.640 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.640 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.640 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.640 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:26.640 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:26.640 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:26.640 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.640 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.640 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:26.640 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.640 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:26.640 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:26.640 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:26.900 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:26.900 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.900 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.900 nvme0n1 00:26:26.900 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.900 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.900 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.900 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.900 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.900 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.159 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.159 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.159 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.159 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.159 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.159 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.159 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:27.159 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.159 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:27.159 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:26:27.159 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:27.159 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODFmYzVhZDZhZTlhNzUxNWRmYWE2MjdjYTU1YzU5YTM3Mzg4YTU3YjI5YTMwZDMy/5PQLQ==: 00:26:27.159 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTliOWExMmY4NWZhNTM1MzY3NDhmMzU3NGFhMDc5YzBzPRFT: 00:26:27.159 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:27.159 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:27.159 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODFmYzVhZDZhZTlhNzUxNWRmYWE2MjdjYTU1YzU5YTM3Mzg4YTU3YjI5YTMwZDMy/5PQLQ==: 00:26:27.159 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTliOWExMmY4NWZhNTM1MzY3NDhmMzU3NGFhMDc5YzBzPRFT: ]] 00:26:27.160 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTliOWExMmY4NWZhNTM1MzY3NDhmMzU3NGFhMDc5YzBzPRFT: 00:26:27.160 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:27.160 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.160 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:27.160 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:27.160 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:27.160 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.160 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:27.160 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.160 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.160 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.160 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.160 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:27.160 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:27.160 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:27.160 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.160 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.160 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:27.160 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.160 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:27.160 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:27.160 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:27.160 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:27.160 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.160 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.420 nvme0n1 00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWMwODY1MTc4ZTRiMDNjOTM0NDYyOWY1ZWRmMWM5NjgxZWQ2NmE1N2JkMzhmNDkyY2VhMjA2NTkzNWU1ZjkwYrriY8A=: 00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWMwODY1MTc4ZTRiMDNjOTM0NDYyOWY1ZWRmMWM5NjgxZWQ2NmE1N2JkMzhmNDkyY2VhMjA2NTkzNWU1ZjkwYrriY8A=: 00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.420 10:53:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:27.420 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:27.680 nvme0n1
00:26:27.680 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:27.680 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:27.680 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:27.680 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:27.680 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:27.680 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:27.680 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:27.680 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:27.680 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:27.680 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:27.680 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:27.680 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:26:27.680 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
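From here the script re-keys the in-kernel nvmet target for the ffdhe6144 round before the host reconnects: nvmet_auth_set_key (auth.sh line 103) picks the DHHC-1 secret and controller secret for the keyid, and the echo trio at lines 48-50 pushes 'hmac(sha256)', the DH group, and the key into the target's host entry. A plausible reconstruction of that helper, assuming the kernel nvmet configfs attribute names (the real implementation lives in SPDK's test/nvmf/host/auth.sh and is not shown in this log):

    # Hypothetical reconstruction; configfs paths/attributes are assumptions.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]:-}
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac($digest)" > "$host/dhchap_hash"     # e.g. hmac(sha256)
        echo "$dhgroup"      > "$host/dhchap_dhgroup"  # e.g. ffdhe6144
        echo "$key"          > "$host/dhchap_key"      # host DHHC-1 secret
        # Only set a controller key when the test defines one (bidirectional
        # authentication); keyid 4 deliberately has none.
        [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"
    }

The DHHC-1:NN:...: strings themselves follow the NVMe-oF secret representation used by nvme-cli's gen-dhchap-key: NN encodes the transform applied to the secret (00 none, 01/02/03 HMAC-SHA-256/384/512), and the payload is base64 of the secret followed by a 4-byte CRC-32.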
00:26:27.680 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0
00:26:27.680 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:27.680 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:27.680 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:27.680 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:26:27.680 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmRhMDE3NGYyMzVlMWMwMDJlZTM2YTMwOWRkODg0ZTZf02Qq:
00:26:27.680 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjhlNTg1MjE0ZDkyNjQ5NjE1ZDA5YTFiODg1NzI5NjQ4NWJhY2RmNGM4Y2VmMDkwZmJjNTAyOWI5NDg1MjQ4MQgQZSk=:
00:26:27.680 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:27.680 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:27.680 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmRhMDE3NGYyMzVlMWMwMDJlZTM2YTMwOWRkODg0ZTZf02Qq:
00:26:27.680 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjhlNTg1MjE0ZDkyNjQ5NjE1ZDA5YTFiODg1NzI5NjQ4NWJhY2RmNGM4Y2VmMDkwZmJjNTAyOWI5NDg1MjQ4MQgQZSk=: ]]
00:26:27.680 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjhlNTg1MjE0ZDkyNjQ5NjE1ZDA5YTFiODg1NzI5NjQ4NWJhY2RmNGM4Y2VmMDkwZmJjNTAyOWI5NDg1MjQ4MQgQZSk=:
00:26:27.680 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0
00:26:27.680 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:27.680 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:27.680 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:27.680 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:26:27.680 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:27.680 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:26:27.680 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:27.680 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:27.680 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:27.680 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:27.680 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:27.680 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:27.680 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:27.680 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:27.680 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:27.680 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:27.680 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:27.680 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- #
ip=NVMF_INITIATOR_IP 00:26:27.680 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:27.680 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:27.681 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:27.681 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.681 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.249 nvme0n1 00:26:28.249 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.249 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.249 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.249 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.249 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.249 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.249 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.249 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.249 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.249 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.249 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.249 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.249 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:28.249 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.249 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:28.249 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:28.249 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:28.249 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODcyYWZhYmNhODMxMWY1MGVjOTA1Zjg2MWQwOTkxYzMzMTg5ZjkxMjdlM2E3ZmU20qFfug==: 00:26:28.249 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: 00:26:28.249 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:28.249 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:28.249 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODcyYWZhYmNhODMxMWY1MGVjOTA1Zjg2MWQwOTkxYzMzMTg5ZjkxMjdlM2E3ZmU20qFfug==: 00:26:28.249 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: ]] 00:26:28.249 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: 
00:26:28.249 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:26:28.249 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.249 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:28.249 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:28.249 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:28.249 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.249 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:28.249 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.249 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.249 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.249 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.249 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:28.249 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:28.249 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:28.249 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.249 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.249 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:28.249 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.249 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:28.250 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:28.250 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:28.250 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:28.250 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.250 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.509 nvme0n1 00:26:28.509 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.509 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.509 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.509 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.509 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.509 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.509 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.509 10:53:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.509 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.509 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.509 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.509 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.509 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:28.509 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.509 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:28.768 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:28.768 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:28.768 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmNhMTQ2ZTc2YmI1YjMyNzI5YjhkOGViMDY1YTc3YWSfsBYA: 00:26:28.768 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: 00:26:28.768 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:28.768 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:28.768 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmNhMTQ2ZTc2YmI1YjMyNzI5YjhkOGViMDY1YTc3YWSfsBYA: 00:26:28.768 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: ]] 00:26:28.768 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: 00:26:28.768 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:28.768 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.768 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:28.768 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:28.768 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:28.768 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.768 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:28.768 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.768 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.768 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.768 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.768 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:28.768 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:28.768 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:28.768 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.768 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.768 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:28.768 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.768 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:28.768 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:28.768 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:28.768 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:28.768 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.768 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.028 nvme0n1 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODFmYzVhZDZhZTlhNzUxNWRmYWE2MjdjYTU1YzU5YTM3Mzg4YTU3YjI5YTMwZDMy/5PQLQ==: 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTliOWExMmY4NWZhNTM1MzY3NDhmMzU3NGFhMDc5YzBzPRFT: 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:ODFmYzVhZDZhZTlhNzUxNWRmYWE2MjdjYTU1YzU5YTM3Mzg4YTU3YjI5YTMwZDMy/5PQLQ==: 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTliOWExMmY4NWZhNTM1MzY3NDhmMzU3NGFhMDc5YzBzPRFT: ]] 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTliOWExMmY4NWZhNTM1MzY3NDhmMzU3NGFhMDc5YzBzPRFT: 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.028 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.597 nvme0n1 00:26:29.597 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.597 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.597 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.597 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.597 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.597 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.597 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.597 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.597 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.597 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.597 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.597 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.597 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:29.597 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.597 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:29.597 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:29.597 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:29.597 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWMwODY1MTc4ZTRiMDNjOTM0NDYyOWY1ZWRmMWM5NjgxZWQ2NmE1N2JkMzhmNDkyY2VhMjA2NTkzNWU1ZjkwYrriY8A=: 00:26:29.597 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:29.597 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:29.598 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:29.598 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWMwODY1MTc4ZTRiMDNjOTM0NDYyOWY1ZWRmMWM5NjgxZWQ2NmE1N2JkMzhmNDkyY2VhMjA2NTkzNWU1ZjkwYrriY8A=: 00:26:29.598 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:29.598 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:26:29.598 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.598 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:29.598 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:29.598 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:29.598 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.598 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:29.598 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.598 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.598 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.598 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.598 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:29.598 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:26:29.598 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:29.598 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.598 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.598 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:29.598 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.598 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:29.598 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:29.598 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:29.598 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:29.598 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.598 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.857 nvme0n1 00:26:29.857 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.857 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.857 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.857 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.857 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.857 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.117 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.117 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.117 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.117 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.117 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.117 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:30.117 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.117 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:30.117 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.117 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:30.117 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:30.117 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:30.117 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmRhMDE3NGYyMzVlMWMwMDJlZTM2YTMwOWRkODg0ZTZf02Qq: 00:26:30.117 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NjhlNTg1MjE0ZDkyNjQ5NjE1ZDA5YTFiODg1NzI5NjQ4NWJhY2RmNGM4Y2VmMDkwZmJjNTAyOWI5NDg1MjQ4MQgQZSk=: 00:26:30.117 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:30.117 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:30.117 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmRhMDE3NGYyMzVlMWMwMDJlZTM2YTMwOWRkODg0ZTZf02Qq: 00:26:30.117 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjhlNTg1MjE0ZDkyNjQ5NjE1ZDA5YTFiODg1NzI5NjQ4NWJhY2RmNGM4Y2VmMDkwZmJjNTAyOWI5NDg1MjQ4MQgQZSk=: ]] 00:26:30.117 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjhlNTg1MjE0ZDkyNjQ5NjE1ZDA5YTFiODg1NzI5NjQ4NWJhY2RmNGM4Y2VmMDkwZmJjNTAyOWI5NDg1MjQ4MQgQZSk=: 00:26:30.117 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:30.117 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.117 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:30.117 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:30.117 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:30.117 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.117 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:30.117 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.117 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.117 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.117 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.117 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:30.117 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:30.117 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:30.117 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.117 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.117 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:30.117 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.117 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:30.117 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:30.117 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:30.117 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:30.117 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.117 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:30.686 nvme0n1 00:26:30.686 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.686 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.686 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.686 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.686 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.686 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.686 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.686 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.686 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.686 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.686 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.686 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.686 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:30.686 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.686 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:30.686 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:30.686 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:30.686 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODcyYWZhYmNhODMxMWY1MGVjOTA1Zjg2MWQwOTkxYzMzMTg5ZjkxMjdlM2E3ZmU20qFfug==: 00:26:30.686 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: 00:26:30.686 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:30.686 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:30.686 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODcyYWZhYmNhODMxMWY1MGVjOTA1Zjg2MWQwOTkxYzMzMTg5ZjkxMjdlM2E3ZmU20qFfug==: 00:26:30.686 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: ]] 00:26:30.686 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: 00:26:30.686 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:30.686 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.686 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:30.686 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:30.686 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:30.686 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:30.686 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:30.686 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.686 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.686 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.686 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.686 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:30.686 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:30.686 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:30.686 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.686 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.686 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:30.686 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.686 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:30.686 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:30.686 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:30.686 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:30.686 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.686 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.254 nvme0n1 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:31.254 
10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmNhMTQ2ZTc2YmI1YjMyNzI5YjhkOGViMDY1YTc3YWSfsBYA: 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmNhMTQ2ZTc2YmI1YjMyNzI5YjhkOGViMDY1YTc3YWSfsBYA: 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: ]] 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.254 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.822 nvme0n1 00:26:31.822 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.822 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.822 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.822 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.822 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.822 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.081 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.081 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.081 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.081 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.081 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.081 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.081 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:32.081 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.081 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:32.081 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:32.081 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:32.081 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODFmYzVhZDZhZTlhNzUxNWRmYWE2MjdjYTU1YzU5YTM3Mzg4YTU3YjI5YTMwZDMy/5PQLQ==: 00:26:32.081 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTliOWExMmY4NWZhNTM1MzY3NDhmMzU3NGFhMDc5YzBzPRFT: 00:26:32.081 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:32.081 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:32.081 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODFmYzVhZDZhZTlhNzUxNWRmYWE2MjdjYTU1YzU5YTM3Mzg4YTU3YjI5YTMwZDMy/5PQLQ==: 00:26:32.081 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTliOWExMmY4NWZhNTM1MzY3NDhmMzU3NGFhMDc5YzBzPRFT: ]] 00:26:32.081 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTliOWExMmY4NWZhNTM1MzY3NDhmMzU3NGFhMDc5YzBzPRFT: 00:26:32.081 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:32.081 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.081 
10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:32.081 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:32.081 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:32.081 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.081 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:32.081 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.081 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.081 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.081 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.081 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:32.081 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:32.081 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:32.081 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.081 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.081 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:32.081 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.081 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:32.081 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:32.081 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:32.081 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:32.081 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.081 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.650 nvme0n1 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWMwODY1MTc4ZTRiMDNjOTM0NDYyOWY1ZWRmMWM5NjgxZWQ2NmE1N2JkMzhmNDkyY2VhMjA2NTkzNWU1ZjkwYrriY8A=: 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWMwODY1MTc4ZTRiMDNjOTM0NDYyOWY1ZWRmMWM5NjgxZWQ2NmE1N2JkMzhmNDkyY2VhMjA2NTkzNWU1ZjkwYrriY8A=: 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.650 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.218 nvme0n1 00:26:33.218 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.218 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.218 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.218 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.218 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.218 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.218 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.218 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.218 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.218 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.218 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.218 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:33.218 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:33.218 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.218 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:26:33.218 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.218 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:33.218 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:33.218 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:33.218 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmRhMDE3NGYyMzVlMWMwMDJlZTM2YTMwOWRkODg0ZTZf02Qq: 00:26:33.218 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjhlNTg1MjE0ZDkyNjQ5NjE1ZDA5YTFiODg1NzI5NjQ4NWJhY2RmNGM4Y2VmMDkwZmJjNTAyOWI5NDg1MjQ4MQgQZSk=: 00:26:33.218 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:33.218 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:33.218 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmRhMDE3NGYyMzVlMWMwMDJlZTM2YTMwOWRkODg0ZTZf02Qq: 00:26:33.218 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NjhlNTg1MjE0ZDkyNjQ5NjE1ZDA5YTFiODg1NzI5NjQ4NWJhY2RmNGM4Y2VmMDkwZmJjNTAyOWI5NDg1MjQ4MQgQZSk=: ]] 00:26:33.218 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjhlNTg1MjE0ZDkyNjQ5NjE1ZDA5YTFiODg1NzI5NjQ4NWJhY2RmNGM4Y2VmMDkwZmJjNTAyOWI5NDg1MjQ4MQgQZSk=: 00:26:33.218 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:33.218 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.218 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:33.218 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:33.218 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:33.218 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.218 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:33.218 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.218 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.218 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.218 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.219 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:33.219 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:33.219 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:33.219 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.219 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.219 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:33.219 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.219 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:33.219 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:33.219 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:33.219 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:33.219 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.219 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.478 nvme0n1 00:26:33.478 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.478 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.478 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.478 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.478 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:33.478 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.478 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.478 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.478 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.478 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.478 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.478 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.478 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:33.478 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.478 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:33.478 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:33.478 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:33.478 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODcyYWZhYmNhODMxMWY1MGVjOTA1Zjg2MWQwOTkxYzMzMTg5ZjkxMjdlM2E3ZmU20qFfug==: 00:26:33.478 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: 00:26:33.478 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:33.478 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:33.478 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODcyYWZhYmNhODMxMWY1MGVjOTA1Zjg2MWQwOTkxYzMzMTg5ZjkxMjdlM2E3ZmU20qFfug==: 00:26:33.478 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: ]] 00:26:33.478 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: 00:26:33.478 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:33.478 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.478 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:33.478 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:33.478 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:33.478 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.478 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:33.478 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.478 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.478 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.478 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:26:33.478 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:33.478 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:33.478 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:33.479 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.479 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.479 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:33.479 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.479 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:33.479 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:33.479 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:33.479 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:33.479 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.479 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.738 nvme0n1 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmNhMTQ2ZTc2YmI1YjMyNzI5YjhkOGViMDY1YTc3YWSfsBYA: 00:26:33.738 10:53:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmNhMTQ2ZTc2YmI1YjMyNzI5YjhkOGViMDY1YTc3YWSfsBYA: 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: ]] 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.738 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.998 nvme0n1 00:26:33.998 10:53:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.998 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.998 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.998 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.998 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.998 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.998 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.998 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.998 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.998 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.998 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.998 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.998 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:33.998 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.998 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:33.998 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:33.998 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:33.998 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODFmYzVhZDZhZTlhNzUxNWRmYWE2MjdjYTU1YzU5YTM3Mzg4YTU3YjI5YTMwZDMy/5PQLQ==: 00:26:33.999 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTliOWExMmY4NWZhNTM1MzY3NDhmMzU3NGFhMDc5YzBzPRFT: 00:26:33.999 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:33.999 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:33.999 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODFmYzVhZDZhZTlhNzUxNWRmYWE2MjdjYTU1YzU5YTM3Mzg4YTU3YjI5YTMwZDMy/5PQLQ==: 00:26:33.999 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTliOWExMmY4NWZhNTM1MzY3NDhmMzU3NGFhMDc5YzBzPRFT: ]] 00:26:33.999 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTliOWExMmY4NWZhNTM1MzY3NDhmMzU3NGFhMDc5YzBzPRFT: 00:26:33.999 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:33.999 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.999 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:33.999 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:33.999 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:33.999 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.999 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:26:33.999 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.999 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.999 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.999 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.999 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:33.999 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:33.999 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:33.999 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.999 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.999 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:33.999 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.999 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:33.999 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:33.999 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:33.999 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:33.999 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.999 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.258 nvme0n1 00:26:34.258 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.258 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.258 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.258 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.258 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.258 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.258 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.258 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.258 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.258 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.258 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.258 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.258 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:34.258 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.258 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:26:34.258 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:34.258 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:34.258 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWMwODY1MTc4ZTRiMDNjOTM0NDYyOWY1ZWRmMWM5NjgxZWQ2NmE1N2JkMzhmNDkyY2VhMjA2NTkzNWU1ZjkwYrriY8A=: 00:26:34.258 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:34.259 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:34.259 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:34.259 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWMwODY1MTc4ZTRiMDNjOTM0NDYyOWY1ZWRmMWM5NjgxZWQ2NmE1N2JkMzhmNDkyY2VhMjA2NTkzNWU1ZjkwYrriY8A=: 00:26:34.259 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:34.259 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:34.259 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.259 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:34.259 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:34.259 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:34.259 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.259 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:34.259 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.259 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.259 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.259 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.259 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:34.259 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:34.259 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:34.259 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.259 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.259 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:34.259 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.259 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:34.259 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:34.259 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:34.259 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:34.259 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.259 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.259 nvme0n1 00:26:34.259 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.259 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.259 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.259 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.259 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.518 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.518 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.518 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.518 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.518 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.518 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.518 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:34.518 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.518 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:26:34.518 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.518 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:34.518 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:34.518 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:34.518 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmRhMDE3NGYyMzVlMWMwMDJlZTM2YTMwOWRkODg0ZTZf02Qq: 00:26:34.518 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjhlNTg1MjE0ZDkyNjQ5NjE1ZDA5YTFiODg1NzI5NjQ4NWJhY2RmNGM4Y2VmMDkwZmJjNTAyOWI5NDg1MjQ4MQgQZSk=: 00:26:34.518 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:34.518 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:34.518 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmRhMDE3NGYyMzVlMWMwMDJlZTM2YTMwOWRkODg0ZTZf02Qq: 00:26:34.518 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjhlNTg1MjE0ZDkyNjQ5NjE1ZDA5YTFiODg1NzI5NjQ4NWJhY2RmNGM4Y2VmMDkwZmJjNTAyOWI5NDg1MjQ4MQgQZSk=: ]] 00:26:34.518 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjhlNTg1MjE0ZDkyNjQ5NjE1ZDA5YTFiODg1NzI5NjQ4NWJhY2RmNGM4Y2VmMDkwZmJjNTAyOWI5NDg1MjQ4MQgQZSk=: 00:26:34.518 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:34.518 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.518 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:34.518 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:26:34.518 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:34.518 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.518 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:34.518 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.518 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.518 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.518 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.518 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:34.518 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:34.518 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:34.518 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.518 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.518 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:34.518 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.518 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:34.519 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:34.519 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:34.519 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:34.519 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.519 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.519 nvme0n1 00:26:34.519 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.519 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.519 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.519 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.519 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.519 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.778 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.778 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.778 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.778 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.778 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.778 
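
The block above is one pass of the driver loop in host/auth.sh: for every dhgroup, each of the five configured keys (keyid 0-4) is first programmed into the kernel nvmet target and then exercised end to end. A minimal sketch of that loop, reconstructed from the host/auth.sh@101-@104 markers in this trace (the surrounding digest loop is an assumption; only the sha384 iterations are visible in this excerpt):

  for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
          # program the key pair into the kernel target, then dial back in with it
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
          connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
  done
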
10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.778 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:34.778 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.778 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:34.778 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:34.778 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:34.778 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODcyYWZhYmNhODMxMWY1MGVjOTA1Zjg2MWQwOTkxYzMzMTg5ZjkxMjdlM2E3ZmU20qFfug==: 00:26:34.778 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: 00:26:34.778 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:34.778 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:34.778 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODcyYWZhYmNhODMxMWY1MGVjOTA1Zjg2MWQwOTkxYzMzMTg5ZjkxMjdlM2E3ZmU20qFfug==: 00:26:34.778 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: ]] 00:26:34.778 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: 00:26:34.778 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:26:34.778 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.778 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:34.778 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:34.778 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:34.778 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.778 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:34.778 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.778 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.778 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.778 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.778 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:34.778 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:34.778 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:34.778 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.778 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.778 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:34.778 10:53:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.778 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:34.778 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:34.778 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:34.778 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:34.778 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.778 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.778 nvme0n1 00:26:34.778 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.778 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.778 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.778 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.778 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.778 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.037 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.037 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.037 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.037 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.037 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.037 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.037 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:35.037 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.038 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:35.038 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:35.038 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:35.038 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmNhMTQ2ZTc2YmI1YjMyNzI5YjhkOGViMDY1YTc3YWSfsBYA: 00:26:35.038 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: 00:26:35.038 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:35.038 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:35.038 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmNhMTQ2ZTc2YmI1YjMyNzI5YjhkOGViMDY1YTc3YWSfsBYA: 00:26:35.038 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: ]] 00:26:35.038 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: 00:26:35.038 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:35.038 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.038 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:35.038 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:35.038 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:35.038 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.038 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:35.038 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.038 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.038 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.038 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.038 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:35.038 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:35.038 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:35.038 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.038 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.038 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:35.038 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.038 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:35.038 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:35.038 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:35.038 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:35.038 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.038 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.038 nvme0n1 00:26:35.038 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.038 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.038 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.038 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.038 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.038 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.297 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:26:35.297 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.297 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.297 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.297 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODFmYzVhZDZhZTlhNzUxNWRmYWE2MjdjYTU1YzU5YTM3Mzg4YTU3YjI5YTMwZDMy/5PQLQ==: 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTliOWExMmY4NWZhNTM1MzY3NDhmMzU3NGFhMDc5YzBzPRFT: 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODFmYzVhZDZhZTlhNzUxNWRmYWE2MjdjYTU1YzU5YTM3Mzg4YTU3YjI5YTMwZDMy/5PQLQ==: 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTliOWExMmY4NWZhNTM1MzY3NDhmMzU3NGFhMDc5YzBzPRFT: ]] 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTliOWExMmY4NWZhNTM1MzY3NDhmMzU3NGFhMDc5YzBzPRFT: 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.298 nvme0n1 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.298 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.557 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.557 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.557 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:35.557 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.557 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:35.557 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:35.557 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:35.557 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWMwODY1MTc4ZTRiMDNjOTM0NDYyOWY1ZWRmMWM5NjgxZWQ2NmE1N2JkMzhmNDkyY2VhMjA2NTkzNWU1ZjkwYrriY8A=: 00:26:35.557 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:35.558 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:35.558 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:35.558 
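
The host/auth.sh@42-@51 lines that repeat through this trace are nvmet_auth_set_key pushing the DH-HMAC-CHAP parameters into the kernel target before each connection attempt. A sketch of what the four echoes do; the configfs destination paths are an assumption (xtrace does not capture redirections), only the echoed values come from the trace:

  nvmet_auth_set_key() {
      local digest=$1 dhgroup=$2 keyid=$3
      local key=${keys[keyid]} ckey=${ckeys[keyid]}
      # assumed nvmet configfs node for the host NQN used throughout this run
      local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
      echo "hmac(${digest})" > "$host/dhchap_hash"     # e.g. 'hmac(sha384)'
      echo "$dhgroup"        > "$host/dhchap_dhgroup"  # e.g. ffdhe3072
      echo "$key"            > "$host/dhchap_key"      # DHHC-1:xx:...: host key
      # keyid 4 has no controller key, so bidirectional auth is skipped there
      [[ -z "$ckey" ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
  }
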
10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWMwODY1MTc4ZTRiMDNjOTM0NDYyOWY1ZWRmMWM5NjgxZWQ2NmE1N2JkMzhmNDkyY2VhMjA2NTkzNWU1ZjkwYrriY8A=: 00:26:35.558 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:35.558 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:35.558 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.558 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:35.558 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:35.558 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:35.558 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.558 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:35.558 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.558 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.558 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.558 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.558 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:35.558 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:35.558 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:35.558 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.558 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.558 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:35.558 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.558 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:35.558 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:35.558 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:35.558 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:35.558 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.558 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.558 nvme0n1 00:26:35.558 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.558 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.558 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.558 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.558 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.558 
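
get_main_ns_ip (the nvmf/common.sh@769-@783 block that precedes every attach) picks the address the initiator dials by mapping the transport to the *name* of an environment variable and then dereferencing it, which is why the trace tests [[ -z NVMF_INITIATOR_IP ]] before [[ -z 10.0.0.1 ]]. A sketch of that logic; the TEST_TRANSPORT variable name is an assumption inferred from the literal tcp in the [[ -z tcp ]] expansion:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      [[ -z "$TEST_TRANSPORT" ]] && return 1
      [[ -z "${ip_candidates[$TEST_TRANSPORT]}" ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}  # a variable name, e.g. NVMF_INITIATOR_IP
      [[ -z "${!ip}" ]] && return 1         # indirect expansion: -> 10.0.0.1
      echo "${!ip}"
  }
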
10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.558 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.558 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.558 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.558 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.558 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.558 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:35.558 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.558 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:35.558 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.558 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:35.558 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:35.558 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:35.558 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmRhMDE3NGYyMzVlMWMwMDJlZTM2YTMwOWRkODg0ZTZf02Qq: 00:26:35.558 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjhlNTg1MjE0ZDkyNjQ5NjE1ZDA5YTFiODg1NzI5NjQ4NWJhY2RmNGM4Y2VmMDkwZmJjNTAyOWI5NDg1MjQ4MQgQZSk=: 00:26:35.558 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:35.558 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:35.558 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmRhMDE3NGYyMzVlMWMwMDJlZTM2YTMwOWRkODg0ZTZf02Qq: 00:26:35.558 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjhlNTg1MjE0ZDkyNjQ5NjE1ZDA5YTFiODg1NzI5NjQ4NWJhY2RmNGM4Y2VmMDkwZmJjNTAyOWI5NDg1MjQ4MQgQZSk=: ]] 00:26:35.558 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjhlNTg1MjE0ZDkyNjQ5NjE1ZDA5YTFiODg1NzI5NjQ4NWJhY2RmNGM4Y2VmMDkwZmJjNTAyOWI5NDg1MjQ4MQgQZSk=: 00:26:35.817 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:35.817 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.817 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:35.817 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:35.817 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:35.818 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.818 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:35.818 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.818 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.818 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:26:35.818 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.818 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:35.818 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:35.818 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:35.818 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.818 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.818 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:35.818 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.818 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:35.818 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:35.818 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:35.818 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:35.818 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.818 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.818 nvme0n1 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ODcyYWZhYmNhODMxMWY1MGVjOTA1Zjg2MWQwOTkxYzMzMTg5ZjkxMjdlM2E3ZmU20qFfug==: 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODcyYWZhYmNhODMxMWY1MGVjOTA1Zjg2MWQwOTkxYzMzMTg5ZjkxMjdlM2E3ZmU20qFfug==: 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: ]] 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:36.077 10:53:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.077 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.337 nvme0n1 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmNhMTQ2ZTc2YmI1YjMyNzI5YjhkOGViMDY1YTc3YWSfsBYA: 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmNhMTQ2ZTc2YmI1YjMyNzI5YjhkOGViMDY1YTc3YWSfsBYA: 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: ]] 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.337 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.597 nvme0n1 00:26:36.597 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.597 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.597 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.597 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.597 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.597 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.597 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.597 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.597 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.597 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.597 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.597 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.597 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:26:36.597 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.597 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:36.597 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:36.597 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:36.597 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODFmYzVhZDZhZTlhNzUxNWRmYWE2MjdjYTU1YzU5YTM3Mzg4YTU3YjI5YTMwZDMy/5PQLQ==: 00:26:36.597 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTliOWExMmY4NWZhNTM1MzY3NDhmMzU3NGFhMDc5YzBzPRFT: 00:26:36.597 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:36.597 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:36.597 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODFmYzVhZDZhZTlhNzUxNWRmYWE2MjdjYTU1YzU5YTM3Mzg4YTU3YjI5YTMwZDMy/5PQLQ==: 00:26:36.597 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTliOWExMmY4NWZhNTM1MzY3NDhmMzU3NGFhMDc5YzBzPRFT: ]] 00:26:36.597 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTliOWExMmY4NWZhNTM1MzY3NDhmMzU3NGFhMDc5YzBzPRFT: 00:26:36.597 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:36.597 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.597 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:36.597 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:36.597 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:36.597 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.597 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:36.597 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.597 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.597 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.597 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.597 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:36.597 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:36.597 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:36.597 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.597 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.597 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:36.597 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.597 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:36.597 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:36.597 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:36.597 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:36.597 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.597 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.857 nvme0n1 00:26:36.857 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.857 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.857 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.857 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.857 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.857 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.857 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.857 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.117 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.117 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.117 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.117 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.117 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:37.117 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.117 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:37.117 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:37.117 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:37.117 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWMwODY1MTc4ZTRiMDNjOTM0NDYyOWY1ZWRmMWM5NjgxZWQ2NmE1N2JkMzhmNDkyY2VhMjA2NTkzNWU1ZjkwYrriY8A=: 00:26:37.117 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:37.117 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:37.117 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:37.117 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWMwODY1MTc4ZTRiMDNjOTM0NDYyOWY1ZWRmMWM5NjgxZWQ2NmE1N2JkMzhmNDkyY2VhMjA2NTkzNWU1ZjkwYrriY8A=: 00:26:37.117 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:37.117 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:37.117 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.117 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:37.117 10:53:44 
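
The recurring host/auth.sh@100-103 markers in this stretch come from three nested loops sweeping the full digest x dhgroup x key matrix; this part of the log is the sha384 pass working through ffdhe4096, ffdhe6144 and ffdhe8192 with key ids 0-4. A sketch of that driver loop, with anything beyond what this section shows (the full array contents) left as an assumption:

  for digest in "${digests[@]}"; do            # @100: sha384 here, sha512 later
      for dhgroup in "${dhgroups[@]}"; do      # @101: ffdhe4096/6144/8192 in this stretch
          for keyid in "${!keys[@]}"; do       # @102: indices 0-4
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # @103: program the target side
              connect_authenticate "$digest" "$dhgroup" "$keyid"  # @104: attach, verify, detach
          done
      done
  done
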
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:37.117 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:37.117 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.117 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:37.117 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.117 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.117 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.117 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.117 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:37.117 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:37.117 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:37.117 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.117 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.117 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:37.117 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.117 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:37.117 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:37.117 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:37.117 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:37.117 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.117 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.377 nvme0n1 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmRhMDE3NGYyMzVlMWMwMDJlZTM2YTMwOWRkODg0ZTZf02Qq: 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjhlNTg1MjE0ZDkyNjQ5NjE1ZDA5YTFiODg1NzI5NjQ4NWJhY2RmNGM4Y2VmMDkwZmJjNTAyOWI5NDg1MjQ4MQgQZSk=: 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmRhMDE3NGYyMzVlMWMwMDJlZTM2YTMwOWRkODg0ZTZf02Qq: 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjhlNTg1MjE0ZDkyNjQ5NjE1ZDA5YTFiODg1NzI5NjQ4NWJhY2RmNGM4Y2VmMDkwZmJjNTAyOWI5NDg1MjQ4MQgQZSk=: ]] 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjhlNTg1MjE0ZDkyNjQ5NjE1ZDA5YTFiODg1NzI5NjQ4NWJhY2RmNGM4Y2VmMDkwZmJjNTAyOWI5NDg1MjQ4MQgQZSk=: 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.377 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.637 nvme0n1 00:26:37.637 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.637 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.637 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.637 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.637 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.637 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.637 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.637 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.637 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.637 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.897 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.897 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.897 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:37.897 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.897 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:37.897 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:37.897 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:37.897 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODcyYWZhYmNhODMxMWY1MGVjOTA1Zjg2MWQwOTkxYzMzMTg5ZjkxMjdlM2E3ZmU20qFfug==: 00:26:37.897 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: 00:26:37.897 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:37.897 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:37.897 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ODcyYWZhYmNhODMxMWY1MGVjOTA1Zjg2MWQwOTkxYzMzMTg5ZjkxMjdlM2E3ZmU20qFfug==: 00:26:37.897 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: ]] 00:26:37.897 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: 00:26:37.897 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:37.897 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.897 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:37.897 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:37.897 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:37.897 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.897 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:37.897 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.897 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.897 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.897 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.897 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:37.897 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:37.898 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:37.898 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.898 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.898 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:37.898 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.898 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:37.898 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:37.898 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:37.898 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:37.898 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.898 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.157 nvme0n1 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.158 10:53:45 
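
On the host side, connect_authenticate (host/auth.sh@55-65) is what generates the rpc_cmd pairs repeated above: it narrows the initiator to the one digest/dhgroup combination under test, attaches with the matching --dhchap-key (plus the optional controller key), confirms the controller came up, and detaches again. The RPC invocations below are copied from the trace; only the function wrapper around them is an assumed reconstruction:

  connect_authenticate() {
      local digest dhgroup keyid ckey
      digest="$1" dhgroup="$2" keyid="$3"
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})       # @58

      rpc_cmd bdev_nvme_set_options \
          --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"     # @60
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a "$(get_main_ns_ip)" -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" "${ckey[@]}"                     # @61
      # a successful DH-HMAC-CHAP handshake leaves exactly one controller
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]  # @64
      rpc_cmd bdev_nvme_detach_controller nvme0                       # @65
  }
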
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmNhMTQ2ZTc2YmI1YjMyNzI5YjhkOGViMDY1YTc3YWSfsBYA: 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmNhMTQ2ZTc2YmI1YjMyNzI5YjhkOGViMDY1YTc3YWSfsBYA: 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: ]] 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.158 10:53:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.158 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.727 nvme0n1 00:26:38.727 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.727 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.727 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.727 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.727 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.727 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.727 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.727 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.727 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.727 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.727 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.727 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.727 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:38.727 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.727 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:38.727 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:38.727 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:38.727 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ODFmYzVhZDZhZTlhNzUxNWRmYWE2MjdjYTU1YzU5YTM3Mzg4YTU3YjI5YTMwZDMy/5PQLQ==: 00:26:38.727 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTliOWExMmY4NWZhNTM1MzY3NDhmMzU3NGFhMDc5YzBzPRFT: 00:26:38.727 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:38.727 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:38.727 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODFmYzVhZDZhZTlhNzUxNWRmYWE2MjdjYTU1YzU5YTM3Mzg4YTU3YjI5YTMwZDMy/5PQLQ==: 00:26:38.728 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTliOWExMmY4NWZhNTM1MzY3NDhmMzU3NGFhMDc5YzBzPRFT: ]] 00:26:38.728 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTliOWExMmY4NWZhNTM1MzY3NDhmMzU3NGFhMDc5YzBzPRFT: 00:26:38.728 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:38.728 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.728 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:38.728 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:38.728 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:38.728 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.728 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:38.728 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.728 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.728 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.728 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.728 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:38.728 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:38.728 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:38.728 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.728 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.728 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:38.728 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.728 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:38.728 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:38.728 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:38.728 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:38.728 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.728 
10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.987 nvme0n1 00:26:38.987 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.987 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.987 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.987 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.987 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.987 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.246 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.246 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.247 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.247 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.247 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.247 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.247 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:39.247 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.247 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:39.247 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:39.247 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:39.247 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWMwODY1MTc4ZTRiMDNjOTM0NDYyOWY1ZWRmMWM5NjgxZWQ2NmE1N2JkMzhmNDkyY2VhMjA2NTkzNWU1ZjkwYrriY8A=: 00:26:39.247 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:39.247 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:39.247 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:39.247 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWMwODY1MTc4ZTRiMDNjOTM0NDYyOWY1ZWRmMWM5NjgxZWQ2NmE1N2JkMzhmNDkyY2VhMjA2NTkzNWU1ZjkwYrriY8A=: 00:26:39.247 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:39.247 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:39.247 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.247 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:39.247 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:39.247 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:39.247 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.247 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:39.247 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.247 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.247 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.247 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.247 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:39.247 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:39.247 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:39.247 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.247 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.247 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:39.247 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.247 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:39.247 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:39.247 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:39.247 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:39.247 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.247 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.506 nvme0n1 00:26:39.506 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.506 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.506 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.506 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.506 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.506 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.506 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.506 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.507 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.507 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.507 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.507 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:39.507 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.507 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:39.507 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.507 10:53:46 
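
get_main_ns_ip (nvmf/common.sh@769-783), expanded many times above, resolves which address the initiator should dial for the transport under test: it maps the transport to the name of an environment variable and then dereferences it with bash indirect expansion, which is why the trace shows the literal NVMF_INITIATOR_IP before the final echo of 10.0.0.1. A sketch; the $TEST_TRANSPORT variable name and the failure returns are assumptions, since the trace only ever shows the evaluated values:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP              # @772
      ip_candidates["tcp"]=NVMF_INITIATOR_IP                  # @773
      [[ -z $TEST_TRANSPORT ]] && return 1                    # @775: evaluates "tcp" here
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # @775
      ip=${ip_candidates[$TEST_TRANSPORT]}                    # @776: the variable *name*
      [[ -z ${!ip} ]] && return 1                             # @778: indirect -> 10.0.0.1
      echo "${!ip}"                                           # @783
  }
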
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:39.507 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:39.507 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:39.507 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmRhMDE3NGYyMzVlMWMwMDJlZTM2YTMwOWRkODg0ZTZf02Qq: 00:26:39.507 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjhlNTg1MjE0ZDkyNjQ5NjE1ZDA5YTFiODg1NzI5NjQ4NWJhY2RmNGM4Y2VmMDkwZmJjNTAyOWI5NDg1MjQ4MQgQZSk=: 00:26:39.507 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:39.507 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:39.507 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmRhMDE3NGYyMzVlMWMwMDJlZTM2YTMwOWRkODg0ZTZf02Qq: 00:26:39.507 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjhlNTg1MjE0ZDkyNjQ5NjE1ZDA5YTFiODg1NzI5NjQ4NWJhY2RmNGM4Y2VmMDkwZmJjNTAyOWI5NDg1MjQ4MQgQZSk=: ]] 00:26:39.507 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjhlNTg1MjE0ZDkyNjQ5NjE1ZDA5YTFiODg1NzI5NjQ4NWJhY2RmNGM4Y2VmMDkwZmJjNTAyOWI5NDg1MjQ4MQgQZSk=: 00:26:39.507 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:39.507 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.507 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:39.507 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:39.507 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:39.507 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.507 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:39.507 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.507 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.507 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.507 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.507 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:39.507 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:39.507 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:39.507 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.507 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.507 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:39.507 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.507 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:39.507 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:39.507 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:39.766 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:39.766 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.766 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.336 nvme0n1 00:26:40.336 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.336 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.336 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.336 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.336 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.336 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.336 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.336 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.336 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.336 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.336 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.336 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:40.336 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:26:40.336 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:40.336 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:40.336 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:40.336 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:40.336 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODcyYWZhYmNhODMxMWY1MGVjOTA1Zjg2MWQwOTkxYzMzMTg5ZjkxMjdlM2E3ZmU20qFfug==: 00:26:40.336 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: 00:26:40.336 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:40.336 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:40.336 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODcyYWZhYmNhODMxMWY1MGVjOTA1Zjg2MWQwOTkxYzMzMTg5ZjkxMjdlM2E3ZmU20qFfug==: 00:26:40.336 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: ]] 00:26:40.336 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: 00:26:40.336 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:40.336 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:40.336 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:40.336 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:40.336 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:40.336 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:40.336 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:40.336 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.336 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.336 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.336 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:40.336 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:40.336 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:40.336 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:40.336 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.336 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.336 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:40.336 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.336 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:40.336 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:40.336 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:40.337 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:40.337 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.337 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.905 nvme0n1 00:26:40.905 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.905 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.905 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.905 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.905 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.905 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.905 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.906 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.906 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:40.906 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.906 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.906 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:40.906 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:40.906 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:40.906 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:40.906 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:40.906 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:40.906 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmNhMTQ2ZTc2YmI1YjMyNzI5YjhkOGViMDY1YTc3YWSfsBYA: 00:26:40.906 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: 00:26:40.906 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:40.906 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:40.906 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmNhMTQ2ZTc2YmI1YjMyNzI5YjhkOGViMDY1YTc3YWSfsBYA: 00:26:40.906 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: ]] 00:26:40.906 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: 00:26:40.906 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:40.906 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:40.906 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:40.906 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:40.906 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:40.906 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:40.906 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:40.906 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.906 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.906 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.906 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:40.906 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:40.906 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:40.906 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:40.906 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.906 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.906 
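
Each successful attach above prints the bdev it created (the bare "nvme0n1" lines), after which the script verifies and tears down before moving to the next digest/dhgroup/key triple. Isolated from the trace (the interleaved [[ 0 == 0 ]] checks are rpc_cmd's generic return-code assertions at autotest_common.sh@591):

  name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')  # @64
  [[ $name == "nvme0" ]]                                        # @64: authentication succeeded
  rpc_cmd bdev_nvme_detach_controller nvme0                     # @65: clean slate for the next triple
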
10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:40.906 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.906 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:40.906 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:40.906 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:40.906 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:40.906 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.906 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.475 nvme0n1 00:26:41.475 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.475 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.475 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:41.475 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.476 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.476 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.476 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.476 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.476 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.476 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.476 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.476 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:41.476 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:41.476 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.476 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:41.476 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:41.476 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:41.476 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODFmYzVhZDZhZTlhNzUxNWRmYWE2MjdjYTU1YzU5YTM3Mzg4YTU3YjI5YTMwZDMy/5PQLQ==: 00:26:41.476 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTliOWExMmY4NWZhNTM1MzY3NDhmMzU3NGFhMDc5YzBzPRFT: 00:26:41.476 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:41.476 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:41.476 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODFmYzVhZDZhZTlhNzUxNWRmYWE2MjdjYTU1YzU5YTM3Mzg4YTU3YjI5YTMwZDMy/5PQLQ==: 00:26:41.476 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:OTliOWExMmY4NWZhNTM1MzY3NDhmMzU3NGFhMDc5YzBzPRFT: ]] 00:26:41.476 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTliOWExMmY4NWZhNTM1MzY3NDhmMzU3NGFhMDc5YzBzPRFT: 00:26:41.476 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:41.476 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:41.476 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:41.476 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:41.476 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:41.476 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:41.476 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:41.476 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.476 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.736 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.736 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:41.736 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:41.736 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:41.736 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:41.736 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.736 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.736 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:41.736 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.736 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:41.736 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:41.736 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:41.736 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:41.736 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.736 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.303 nvme0n1 00:26:42.303 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.303 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.303 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.303 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.303 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.303 10:53:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.303 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.303 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.303 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.303 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.303 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.303 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.303 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:42.303 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.303 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:42.303 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:42.303 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:42.303 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWMwODY1MTc4ZTRiMDNjOTM0NDYyOWY1ZWRmMWM5NjgxZWQ2NmE1N2JkMzhmNDkyY2VhMjA2NTkzNWU1ZjkwYrriY8A=: 00:26:42.303 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:42.303 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:42.303 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:42.303 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWMwODY1MTc4ZTRiMDNjOTM0NDYyOWY1ZWRmMWM5NjgxZWQ2NmE1N2JkMzhmNDkyY2VhMjA2NTkzNWU1ZjkwYrriY8A=: 00:26:42.303 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:42.303 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:42.303 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.303 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:42.303 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:42.303 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:42.303 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.303 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:42.303 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.303 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.303 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.304 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.304 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:42.304 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:42.304 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:42.304 10:53:49 
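Key index 4 above is the one case with an empty controller key (ckey=), so the `[[ -z '' ]]` branch skips the controller-key echo and the attach at host/auth.sh@61 runs without --dhchap-ctrlr-key: host-only, unidirectional authentication. The ${ckeys[keyid]:+...} expansion at host/auth.sh@58 is what drops the flag; a small standalone illustration of that idiom (the array contents here are placeholders, not the real secrets):

    declare -a ckeys=([3]="DHHC-1:00:placeholder:" [4]="")   # index 4 deliberately empty
    for keyid in 3 4; do
        # ${var:+word} expands to word only when var is set and non-empty,
        # so an empty ckey contributes zero extra arguments to the attach.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${#ckey[@]} extra args"        # prints 2, then 0
    done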
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.304 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.304 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:42.304 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.304 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:42.304 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:42.304 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:42.304 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:42.304 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.304 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.872 nvme0n1 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmRhMDE3NGYyMzVlMWMwMDJlZTM2YTMwOWRkODg0ZTZf02Qq: 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NjhlNTg1MjE0ZDkyNjQ5NjE1ZDA5YTFiODg1NzI5NjQ4NWJhY2RmNGM4Y2VmMDkwZmJjNTAyOWI5NDg1MjQ4MQgQZSk=: 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmRhMDE3NGYyMzVlMWMwMDJlZTM2YTMwOWRkODg0ZTZf02Qq: 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjhlNTg1MjE0ZDkyNjQ5NjE1ZDA5YTFiODg1NzI5NjQ4NWJhY2RmNGM4Y2VmMDkwZmJjNTAyOWI5NDg1MjQ4MQgQZSk=: ]] 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjhlNTg1MjE0ZDkyNjQ5NjE1ZDA5YTFiODg1NzI5NjQ4NWJhY2RmNGM4Y2VmMDkwZmJjNTAyOWI5NDg1MjQ4MQgQZSk=: 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.872 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:43.131 nvme0n1 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODcyYWZhYmNhODMxMWY1MGVjOTA1Zjg2MWQwOTkxYzMzMTg5ZjkxMjdlM2E3ZmU20qFfug==: 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODcyYWZhYmNhODMxMWY1MGVjOTA1Zjg2MWQwOTkxYzMzMTg5ZjkxMjdlM2E3ZmU20qFfug==: 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: ]] 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.131 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.390 nvme0n1 00:26:43.390 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.390 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.390 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.390 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.390 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.390 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.390 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.390 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.390 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.390 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.390 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.390 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.390 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:43.390 
10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.390 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:43.390 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:43.390 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:43.390 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmNhMTQ2ZTc2YmI1YjMyNzI5YjhkOGViMDY1YTc3YWSfsBYA: 00:26:43.390 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: 00:26:43.390 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:43.390 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:43.390 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmNhMTQ2ZTc2YmI1YjMyNzI5YjhkOGViMDY1YTc3YWSfsBYA: 00:26:43.390 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: ]] 00:26:43.390 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: 00:26:43.390 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:26:43.390 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.390 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:43.390 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:43.390 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:43.390 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.390 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:43.390 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.390 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.390 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.390 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.390 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:43.390 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:43.390 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:43.390 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.390 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.390 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:43.390 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.391 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:43.391 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:43.391 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:43.391 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:43.391 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.391 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.649 nvme0n1 00:26:43.649 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.649 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.649 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.649 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.649 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.649 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.649 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.649 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.649 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.649 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.649 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.649 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.649 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:43.649 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.649 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:43.649 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:43.649 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:43.649 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODFmYzVhZDZhZTlhNzUxNWRmYWE2MjdjYTU1YzU5YTM3Mzg4YTU3YjI5YTMwZDMy/5PQLQ==: 00:26:43.649 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTliOWExMmY4NWZhNTM1MzY3NDhmMzU3NGFhMDc5YzBzPRFT: 00:26:43.649 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:43.649 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:43.649 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODFmYzVhZDZhZTlhNzUxNWRmYWE2MjdjYTU1YzU5YTM3Mzg4YTU3YjI5YTMwZDMy/5PQLQ==: 00:26:43.649 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTliOWExMmY4NWZhNTM1MzY3NDhmMzU3NGFhMDc5YzBzPRFT: ]] 00:26:43.649 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTliOWExMmY4NWZhNTM1MzY3NDhmMzU3NGFhMDc5YzBzPRFT: 00:26:43.649 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:43.649 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.650 
10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:43.650 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:43.650 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:43.650 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.650 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:43.650 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.650 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.650 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.650 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.650 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:43.650 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:43.650 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:43.650 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.650 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.650 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:43.650 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.650 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:43.650 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:43.650 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:43.650 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:43.650 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.650 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.650 nvme0n1 00:26:43.650 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.650 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.650 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.650 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.650 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.650 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
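The get_main_ns_ip block repeated before every attach resolves the connect address indirectly: ip_candidates maps each transport to the *name* of the environment variable that holds the address, and the trace's `ip=NVMF_INITIATOR_IP` followed by `echo 10.0.0.1` is an indirect expansion of that name. A condensed sketch of the lookup (the transport variable's name is an assumption; the trace only shows its value, tcp):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        ip=${ip_candidates[$TEST_TRANSPORT]}    # a variable *name*, e.g. NVMF_INITIATOR_IP
        [[ -n $TEST_TRANSPORT && -n $ip ]]      # the [[ -z ... ]] guards seen in the trace
        echo "${!ip}"                           # indirect expansion -> 10.0.0.1
    }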
common/autotest_common.sh@10 -- # set +x 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWMwODY1MTc4ZTRiMDNjOTM0NDYyOWY1ZWRmMWM5NjgxZWQ2NmE1N2JkMzhmNDkyY2VhMjA2NTkzNWU1ZjkwYrriY8A=: 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWMwODY1MTc4ZTRiMDNjOTM0NDYyOWY1ZWRmMWM5NjgxZWQ2NmE1N2JkMzhmNDkyY2VhMjA2NTkzNWU1ZjkwYrriY8A=: 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.909 nvme0n1 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.909 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.168 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.168 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:44.168 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.168 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:44.168 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.168 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:44.168 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:44.168 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:44.168 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmRhMDE3NGYyMzVlMWMwMDJlZTM2YTMwOWRkODg0ZTZf02Qq: 00:26:44.168 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjhlNTg1MjE0ZDkyNjQ5NjE1ZDA5YTFiODg1NzI5NjQ4NWJhY2RmNGM4Y2VmMDkwZmJjNTAyOWI5NDg1MjQ4MQgQZSk=: 00:26:44.168 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:44.168 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:44.168 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmRhMDE3NGYyMzVlMWMwMDJlZTM2YTMwOWRkODg0ZTZf02Qq: 00:26:44.168 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjhlNTg1MjE0ZDkyNjQ5NjE1ZDA5YTFiODg1NzI5NjQ4NWJhY2RmNGM4Y2VmMDkwZmJjNTAyOWI5NDg1MjQ4MQgQZSk=: ]] 00:26:44.168 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NjhlNTg1MjE0ZDkyNjQ5NjE1ZDA5YTFiODg1NzI5NjQ4NWJhY2RmNGM4Y2VmMDkwZmJjNTAyOWI5NDg1MjQ4MQgQZSk=: 00:26:44.168 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:44.168 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.168 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:44.168 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:44.168 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:44.168 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.168 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:44.168 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.168 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.168 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.168 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.168 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:44.168 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:44.168 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:44.168 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.168 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.168 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:44.168 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.168 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:44.168 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:44.168 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:44.168 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:44.168 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.169 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.169 nvme0n1 00:26:44.169 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.169 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.169 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.169 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.169 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.169 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.169 
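Inside nvmet_auth_set_key the trace shows a run of bare echos (host/auth.sh@48-51): the digest in the kernel's hmac(...) notation, the dhgroup, the key and, when non-empty, the controller key. Those echos are presumably redirected into the kernel nvmet configfs attributes for the host entry; a hedged sketch of what such writes look like (the configfs path and attribute names are assumptions, as the redirections are not shown in the trace):

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha512)' > "$host/dhchap_hash"       # digest, kernel crypto-API notation
    echo ffdhe3072      > "$host/dhchap_dhgroup"    # DH group for the exchange
    echo "$key"         > "$host/dhchap_key"        # host secret (DHHC-1:...)
    [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"   # only for bidirectional auth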
10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.169 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.169 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.169 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODcyYWZhYmNhODMxMWY1MGVjOTA1Zjg2MWQwOTkxYzMzMTg5ZjkxMjdlM2E3ZmU20qFfug==: 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODcyYWZhYmNhODMxMWY1MGVjOTA1Zjg2MWQwOTkxYzMzMTg5ZjkxMjdlM2E3ZmU20qFfug==: 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: ]] 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:44.428 10:53:51 
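The verification step keeps appearing in the trace as `[[ nvme0 == \n\v\m\e\0 ]]`. The backslashes are an xtrace artifact, not script source: inside [[ ]] the right-hand side of == is a glob pattern, and bash's trace output escapes every character of a quoted right-hand side to show it is matched literally rather than as a pattern. A self-contained demonstration:

    set -x
    name=nvme0
    [[ $name == "nvme0" ]]    # traced as: [[ nvme0 == \n\v\m\e\0 ]]
    [[ $name == nvme* ]]      # unquoted RHS stays a glob: [[ nvme0 == nvme* ]]
    set +x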
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.428 nvme0n1 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmNhMTQ2ZTc2YmI1YjMyNzI5YjhkOGViMDY1YTc3YWSfsBYA: 00:26:44.428 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: 00:26:44.428 10:53:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:44.688 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:44.688 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmNhMTQ2ZTc2YmI1YjMyNzI5YjhkOGViMDY1YTc3YWSfsBYA: 00:26:44.688 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: ]] 00:26:44.688 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: 00:26:44.688 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:44.688 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.688 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:44.688 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:44.688 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:44.688 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.688 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:44.688 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.688 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.688 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.688 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.688 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:44.688 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:44.688 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:44.688 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.688 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.688 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:44.688 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.688 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:44.688 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:44.688 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:44.688 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:44.688 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.688 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.688 nvme0n1 00:26:44.688 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.688 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.688 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.688 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.688 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.688 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.688 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.688 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.688 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.688 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.688 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.688 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.688 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:44.688 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.688 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:44.688 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:44.688 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:44.688 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODFmYzVhZDZhZTlhNzUxNWRmYWE2MjdjYTU1YzU5YTM3Mzg4YTU3YjI5YTMwZDMy/5PQLQ==: 00:26:44.947 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTliOWExMmY4NWZhNTM1MzY3NDhmMzU3NGFhMDc5YzBzPRFT: 00:26:44.947 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:44.947 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:44.947 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODFmYzVhZDZhZTlhNzUxNWRmYWE2MjdjYTU1YzU5YTM3Mzg4YTU3YjI5YTMwZDMy/5PQLQ==: 00:26:44.947 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTliOWExMmY4NWZhNTM1MzY3NDhmMzU3NGFhMDc5YzBzPRFT: ]] 00:26:44.947 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTliOWExMmY4NWZhNTM1MzY3NDhmMzU3NGFhMDc5YzBzPRFT: 00:26:44.947 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:44.947 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.947 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:44.947 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:44.947 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:44.947 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.947 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:44.947 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.947 10:53:52 
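All of the secrets in this trace share the DHHC-1:NN:<base64>: layout. Under the nvme-cli convention (an editorial reading, not something the trace itself states), the middle field records how the secret was transformed: 00 means it is used as-is, while 01/02/03 indicate HMAC-SHA-256/-384/-512, and the base64 payload carries the secret followed by a 4-byte CRC. A quick way to pull one apart, using the key-0 secret from above:

    key='DHHC-1:00:ZmRhMDE3NGYyMzVlMWMwMDJlZTM2YTMwOWRkODg0ZTZf02Qq:'
    IFS=: read -r ver xform b64 _ <<< "$key"
    echo "$ver / $xform"                  # DHHC-1 / 00 -> untransformed secret
    echo -n "$b64" | base64 -d | wc -c    # secret bytes + 4-byte CRC (36 here)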
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.947 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.947 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.948 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:44.948 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:44.948 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:44.948 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.948 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.948 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:44.948 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.948 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:44.948 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:44.948 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:44.948 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:44.948 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.948 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.948 nvme0n1 00:26:44.948 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.948 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.948 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.948 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.948 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.948 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.948 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.948 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.948 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.948 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.948 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.948 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.948 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:44.948 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.948 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:44.948 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:44.948 
10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:44.948 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWMwODY1MTc4ZTRiMDNjOTM0NDYyOWY1ZWRmMWM5NjgxZWQ2NmE1N2JkMzhmNDkyY2VhMjA2NTkzNWU1ZjkwYrriY8A=: 00:26:44.948 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:44.948 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:44.948 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:44.948 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWMwODY1MTc4ZTRiMDNjOTM0NDYyOWY1ZWRmMWM5NjgxZWQ2NmE1N2JkMzhmNDkyY2VhMjA2NTkzNWU1ZjkwYrriY8A=: 00:26:44.948 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:44.948 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:44.948 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.948 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:44.948 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:44.948 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:44.948 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.948 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:44.948 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.948 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:45.207 nvme0n1
10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0
00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmRhMDE3NGYyMzVlMWMwMDJlZTM2YTMwOWRkODg0ZTZf02Qq:
00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjhlNTg1MjE0ZDkyNjQ5NjE1ZDA5YTFiODg1NzI5NjQ4NWJhY2RmNGM4Y2VmMDkwZmJjNTAyOWI5NDg1MjQ4MQgQZSk=:
00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmRhMDE3NGYyMzVlMWMwMDJlZTM2YTMwOWRkODg0ZTZf02Qq:
00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjhlNTg1MjE0ZDkyNjQ5NjE1ZDA5YTFiODg1NzI5NjQ4NWJhY2RmNGM4Y2VmMDkwZmJjNTAyOWI5NDg1MjQ4MQgQZSk=: ]]
00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjhlNTg1MjE0ZDkyNjQ5NjE1ZDA5YTFiODg1NzI5NjQ4NWJhY2RmNGM4Y2VmMDkwZmJjNTAyOWI5NDg1MjQ4MQgQZSk=:
00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0
00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:45.207 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:45.466 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:45.466 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:45.466 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:45.466 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:45.466 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:45.466 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:45.466 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:45.466 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:45.466 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:45.466 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:45.466 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:45.466 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:26:45.466 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.466 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:45.466 nvme0n1
00:26:45.466 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:45.466 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:45.466 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:45.466 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.466 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:45.466 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:45.725 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:45.725 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:45.725 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.725 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:45.725 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
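[Annotation] The echo 'hmac(sha512)' / echo <dhgroup> / echo DHHC-1:... triple that opens every iteration is the target-side half, nvmet_auth_set_key: before each reconnect the kernel nvmet host entry is re-keyed so it agrees with what the initiator is about to offer. A sketch of equivalent provisioning through the nvmet configfs tree; the attribute paths below are the standard Linux nvmet ones and are an assumption here, since the trace only shows the echoes, not their destinations:

  #!/usr/bin/env bash
  # Sketch: re-key one nvmet host entry for DH-HMAC-CHAP (paths assumed).
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

  nvmet_auth_set_key() {
      local digest=$1 dhgroup=$2 keyid=$3
      local key=${keys[keyid]} ckey=${ckeys[keyid]}

      echo "hmac(${digest})" > "$host/dhchap_hash"      # e.g. hmac(sha512)
      echo "$dhgroup"        > "$host/dhchap_dhgroup"   # e.g. ffdhe4096
      echo "$key"            > "$host/dhchap_key"       # DHHC-1:xx:...:
      # The controller key enables bidirectional auth and is optional per keyid.
      [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"
  }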
00:26:45.725 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:45.725 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1
00:26:45.725 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:45.725 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:45.725 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:45.725 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:45.725 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODcyYWZhYmNhODMxMWY1MGVjOTA1Zjg2MWQwOTkxYzMzMTg5ZjkxMjdlM2E3ZmU20qFfug==:
00:26:45.725 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==:
00:26:45.725 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:45.725 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:45.725 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODcyYWZhYmNhODMxMWY1MGVjOTA1Zjg2MWQwOTkxYzMzMTg5ZjkxMjdlM2E3ZmU20qFfug==:
00:26:45.725 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: ]]
00:26:45.725 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==:
00:26:45.725 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1
00:26:45.725 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:45.725 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:45.725 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:45.725 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:45.725 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:45.725 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:26:45.725 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.725 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:45.725 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:45.725 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:45.725 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:45.725 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:45.725 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:45.725 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:45.725 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:45.725 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:45.725 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:45.725 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:45.725 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:45.725 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:45.725 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:45.725 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.725 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:45.985 nvme0n1
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmNhMTQ2ZTc2YmI1YjMyNzI5YjhkOGViMDY1YTc3YWSfsBYA:
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5:
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmNhMTQ2ZTc2YmI1YjMyNzI5YjhkOGViMDY1YTc3YWSfsBYA:
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: ]]
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5:
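[Annotation] The recurring ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line is bash's optional-argument idiom: ${var:+word} expands to word only when var is set and non-empty, so the array picks up the two extra CLI words exactly when a controller key exists for this keyid (which is why keyid 4, whose ckey is empty, attaches with --dhchap-key alone). A self-contained illustration; the example key string is made up:

  #!/usr/bin/env bash
  # ${var:+word} expands to word only if var is set and non-empty.
  ckeys=([0]="DHHC-1:03:example=:" [4]="")

  for keyid in 0 4; do
      args=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid -> ${#args[@]} extra arg(s): ${args[*]}"
  done
  # keyid=0 -> 2 extra arg(s): --dhchap-ctrlr-key ckey0
  # keyid=4 -> 0 extra arg(s):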
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.985 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:46.244 nvme0n1
00:26:46.244 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:46.244 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:46.244 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:46.244 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:46.244 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:46.244 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:46.244 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:46.244 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:46.244 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:46.244 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:46.244 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:46.244 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:46.244 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3
00:26:46.244 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:46.244 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:46.244 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:46.244 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:26:46.244 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODFmYzVhZDZhZTlhNzUxNWRmYWE2MjdjYTU1YzU5YTM3Mzg4YTU3YjI5YTMwZDMy/5PQLQ==:
00:26:46.244 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTliOWExMmY4NWZhNTM1MzY3NDhmMzU3NGFhMDc5YzBzPRFT:
00:26:46.244 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:46.244 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:46.245 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODFmYzVhZDZhZTlhNzUxNWRmYWE2MjdjYTU1YzU5YTM3Mzg4YTU3YjI5YTMwZDMy/5PQLQ==:
00:26:46.245 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTliOWExMmY4NWZhNTM1MzY3NDhmMzU3NGFhMDc5YzBzPRFT: ]]
00:26:46.245 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTliOWExMmY4NWZhNTM1MzY3NDhmMzU3NGFhMDc5YzBzPRFT:
00:26:46.245 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3
00:26:46.245 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:46.245 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:46.245 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:46.245 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:26:46.245 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:46.245 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:26:46.245 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:46.245 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:46.245 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:46.245 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:46.245 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:46.245 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:46.245 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:46.245 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:46.245 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:46.245 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:46.245 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:46.245 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:46.245 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:46.245 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:46.245 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:26:46.245 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:46.245 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:46.503 nvme0n1
00:26:46.503 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:46.503 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:46.503 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:46.503 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:46.503 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:46.503 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:46.503 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:46.503 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:46.503 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:46.503 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:46.503 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:46.503 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:46.503 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4
00:26:46.503 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:46.503 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:46.503 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:46.503 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:26:46.503 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWMwODY1MTc4ZTRiMDNjOTM0NDYyOWY1ZWRmMWM5NjgxZWQ2NmE1N2JkMzhmNDkyY2VhMjA2NTkzNWU1ZjkwYrriY8A=:
00:26:46.503 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:26:46.503 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:46.503 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:46.503 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWMwODY1MTc4ZTRiMDNjOTM0NDYyOWY1ZWRmMWM5NjgxZWQ2NmE1N2JkMzhmNDkyY2VhMjA2NTkzNWU1ZjkwYrriY8A=:
00:26:46.504 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:26:46.504 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4
00:26:46.504 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:46.504 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:46.504 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:46.504 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:26:46.504 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:46.504 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:26:46.504 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:46.504 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:46.762 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:46.762 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:46.762 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:46.762 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:46.762 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:46.762 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:46.762 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:46.762 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:46.762 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:46.762 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:46.762 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:46.762 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:46.762 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:26:46.762 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:46.762 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:47.020 nvme0n1
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
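[Annotation] Every secret in this trace follows the DHHC-1:NN:<base64>: layout that nvme-cli's gen-dhchap-key also produces: NN encodes the optional secret transformation (00 = untransformed, 01/02/03 = transformed with SHA-256/384/512, which is why key4 carries DHHC-1:03:), and the base64 payload is the secret itself followed by a 4-byte CRC-32 check value. That reading of the format comes from the DH-HMAC-CHAP key definition (NVMe TP 8006), not from the log itself; a quick sanity check on a payload's length:

  #!/usr/bin/env bash
  # Decode the payload of a DHHC-1 secret and report its size:
  # 36/52/68 decoded bytes = 32/48/64-byte secret + 4-byte CRC-32 trailer.
  key='DHHC-1:03:MWMwODY1MTc4ZTRiMDNjOTM0NDYyOWY1ZWRmMWM5NjgxZWQ2NmE1N2JkMzhmNDkyY2VhMjA2NTkzNWU1ZjkwYrriY8A=:'

  b64=${key#DHHC-1:*:}   # strip the "DHHC-1:NN:" prefix (shortest match)
  b64=${b64%:}           # strip the trailing ":"
  printf '%s' "$b64" | base64 -d | wc -c   # should print 68 for this key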
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmRhMDE3NGYyMzVlMWMwMDJlZTM2YTMwOWRkODg0ZTZf02Qq:
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjhlNTg1MjE0ZDkyNjQ5NjE1ZDA5YTFiODg1NzI5NjQ4NWJhY2RmNGM4Y2VmMDkwZmJjNTAyOWI5NDg1MjQ4MQgQZSk=:
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmRhMDE3NGYyMzVlMWMwMDJlZTM2YTMwOWRkODg0ZTZf02Qq:
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjhlNTg1MjE0ZDkyNjQ5NjE1ZDA5YTFiODg1NzI5NjQ4NWJhY2RmNGM4Y2VmMDkwZmJjNTAyOWI5NDg1MjQ4MQgQZSk=: ]]
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjhlNTg1MjE0ZDkyNjQ5NjE1ZDA5YTFiODg1NzI5NjQ4NWJhY2RmNGM4Y2VmMDkwZmJjNTAyOWI5NDg1MjQ4MQgQZSk=:
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:47.021 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:47.280 nvme0n1
00:26:47.280 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:47.280 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:47.280 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:47.280 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:47.280 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:47.280 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:47.280 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:47.538 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:47.538 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:47.538 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:47.538 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:47.538 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:47.538 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1
00:26:47.538 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:47.538 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:47.538 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:47.538 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:47.538 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODcyYWZhYmNhODMxMWY1MGVjOTA1Zjg2MWQwOTkxYzMzMTg5ZjkxMjdlM2E3ZmU20qFfug==:
00:26:47.538 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==:
00:26:47.538 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:47.538 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:47.538 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODcyYWZhYmNhODMxMWY1MGVjOTA1Zjg2MWQwOTkxYzMzMTg5ZjkxMjdlM2E3ZmU20qFfug==:
00:26:47.538 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: ]]
00:26:47.538 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==:
00:26:47.538 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1
00:26:47.538 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:47.538 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:47.538 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:47.538 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:47.538 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:47.538 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:26:47.538 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:47.538 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:47.539 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:47.539 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:47.539 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:47.539 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:47.539 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:47.539 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:47.539 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:47.539 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:47.539 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:47.539 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:47.539 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:47.539 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:47.539 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:47.539 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:47.539 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:47.797 nvme0n1
00:26:47.797 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:47.797 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:47.797 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:47.797 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:47.797 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:47.797 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:47.797 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:47.797 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:47.797 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:47.797 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:47.797 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:47.797 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:47.797 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2
00:26:47.797 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:47.797 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:47.797 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:47.798 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:26:47.798 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmNhMTQ2ZTc2YmI1YjMyNzI5YjhkOGViMDY1YTc3YWSfsBYA:
00:26:47.798 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5:
00:26:47.798 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:47.798 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:47.798 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmNhMTQ2ZTc2YmI1YjMyNzI5YjhkOGViMDY1YTc3YWSfsBYA:
00:26:47.798 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: ]]
00:26:47.798 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5:
00:26:47.798 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2
00:26:47.798 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:47.798 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:47.798 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:47.798 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
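[Annotation] get_main_ns_ip, replayed before every attach, maps the transport under test to the environment variable holding the right address and then dereferences it, which is where the [[ -z tcp ]] / ip=NVMF_INITIATOR_IP / echo 10.0.0.1 lines come from. A reconstruction of that logic from the traced lines (TEST_TRANSPORT and NVMF_INITIATOR_IP are normally exported by the surrounding nvmf/common.sh; the values here match this run):

  #!/usr/bin/env bash
  # Reconstruction of get_main_ns_ip as traced above.
  TEST_TRANSPORT=tcp
  NVMF_INITIATOR_IP=10.0.0.1

  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP

      [[ -z $TEST_TRANSPORT ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable to read
      [[ -z $ip ]] && return 1               # unknown transport
      [[ -z ${!ip} ]] && return 1            # indirect expansion: its value
      echo "${!ip}"
  }

  get_main_ns_ip   # -> 10.0.0.1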
00:26:47.798 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:47.798 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:26:47.798 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:47.798 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:47.798 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:47.798 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:47.798 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:47.798 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:47.798 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:47.798 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:47.798 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:47.798 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:47.798 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:47.798 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:47.798 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:47.798 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:47.798 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:47.798 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:47.798 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:48.364 nvme0n1
00:26:48.364 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:48.364 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:48.364 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:48.364 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:48.364 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:48.364 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:48.364 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:48.364 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:48.364 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:48.364 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:48.364 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:48.364 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3
00:26:48.364 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:48.364 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:48.364 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:48.364 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:26:48.364 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODFmYzVhZDZhZTlhNzUxNWRmYWE2MjdjYTU1YzU5YTM3Mzg4YTU3YjI5YTMwZDMy/5PQLQ==:
00:26:48.364 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTliOWExMmY4NWZhNTM1MzY3NDhmMzU3NGFhMDc5YzBzPRFT:
00:26:48.364 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:48.364 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:48.364 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODFmYzVhZDZhZTlhNzUxNWRmYWE2MjdjYTU1YzU5YTM3Mzg4YTU3YjI5YTMwZDMy/5PQLQ==:
00:26:48.364 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTliOWExMmY4NWZhNTM1MzY3NDhmMzU3NGFhMDc5YzBzPRFT: ]]
00:26:48.364 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTliOWExMmY4NWZhNTM1MzY3NDhmMzU3NGFhMDc5YzBzPRFT:
00:26:48.365 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3
00:26:48.365 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:48.365 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:48.365 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:48.365 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:26:48.365 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:48.365 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:26:48.365 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:48.365 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:48.365 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:48.365 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:48.365 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:48.365 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:48.365 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:48.365 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:48.365 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:48.365 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:48.365 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:48.365 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:48.365 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:48.365 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:48.365 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:26:48.365 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:48.365 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:48.624 nvme0n1
00:26:48.624 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:48.624 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:48.624 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:48.624 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:48.624 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:48.624 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:48.882 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:48.882 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:48.882 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:48.882 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:48.882 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:48.882 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:48.882 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4
00:26:48.882 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:48.882 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:48.882 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:48.882 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:26:48.882 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWMwODY1MTc4ZTRiMDNjOTM0NDYyOWY1ZWRmMWM5NjgxZWQ2NmE1N2JkMzhmNDkyY2VhMjA2NTkzNWU1ZjkwYrriY8A=:
00:26:48.882 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:26:48.882 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:48.882 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:48.882 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWMwODY1MTc4ZTRiMDNjOTM0NDYyOWY1ZWRmMWM5NjgxZWQ2NmE1N2JkMzhmNDkyY2VhMjA2NTkzNWU1ZjkwYrriY8A=:
00:26:48.882 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:26:48.882 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4
00:26:48.882 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:48.882 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:48.882 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:48.882 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:26:48.882 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:48.882 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:26:48.882 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:48.882 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:49.141 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:49.141 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:49.141 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:49.141 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:49.141 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:49.141 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:49.141 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:49.141 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:49.141 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:49.141 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:49.141 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:49.141 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:49.141 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:26:49.141 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:49.141 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:49.141 nvme0n1
00:26:49.141 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:49.141 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
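[Annotation] By this point the shape of the whole section is clear: one digest under test (sha512 here), an outer loop over DH groups, and an inner loop over the five key ids, re-keying the target and re-running the handshake for every combination. Reusing the two helpers sketched earlier, the driver is equivalent to the loop below; the group list is inferred from the groups that appear in this excerpt (earlier groups ran before it):

  #!/usr/bin/env bash
  # Shape of the test matrix this section walks through for sha512.
  dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)

  for dhgroup in "${dhgroups[@]}"; do
      for keyid in 0 1 2 3 4; do
          nvmet_auth_set_key   sha512 "$dhgroup" "$keyid"   # target-side re-key
          connect_authenticate sha512 "$dhgroup" "$keyid"   # host-side handshake
      done
  done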
00:26:49.142 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:49.142 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:49.142 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:49.142 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:49.142 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:49.142 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:49.142 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:26:49.142 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:49.142 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0
00:26:49.142 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:49.142 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:49.142 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:26:49.142 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:26:49.142 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmRhMDE3NGYyMzVlMWMwMDJlZTM2YTMwOWRkODg0ZTZf02Qq:
00:26:49.142 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjhlNTg1MjE0ZDkyNjQ5NjE1ZDA5YTFiODg1NzI5NjQ4NWJhY2RmNGM4Y2VmMDkwZmJjNTAyOWI5NDg1MjQ4MQgQZSk=:
00:26:49.142 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:49.142 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:26:49.142 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmRhMDE3NGYyMzVlMWMwMDJlZTM2YTMwOWRkODg0ZTZf02Qq:
00:26:49.142 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjhlNTg1MjE0ZDkyNjQ5NjE1ZDA5YTFiODg1NzI5NjQ4NWJhY2RmNGM4Y2VmMDkwZmJjNTAyOWI5NDg1MjQ4MQgQZSk=: ]]
00:26:49.142 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjhlNTg1MjE0ZDkyNjQ5NjE1ZDA5YTFiODg1NzI5NjQ4NWJhY2RmNGM4Y2VmMDkwZmJjNTAyOWI5NDg1MjQ4MQgQZSk=:
00:26:49.142 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0
00:26:49.142 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:49.142 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:49.142 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:26:49.142 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:26:49.142 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:49.142 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:26:49.142 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:49.142 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:49.142 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:49.142 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:49.142 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:49.142 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:49.142 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:49.142 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:49.142 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:49.142 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:49.142 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:49.142 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:49.142 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:49.142 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:49.142 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:26:49.142 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:49.142 10:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:49.783 nvme0n1
00:26:49.783 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:49.783 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:49.783 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:49.783 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:49.783 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:49.783 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:49.783 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:49.783 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:49.783 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:49.783 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:49.783 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:49.783 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:49.783 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1
00:26:49.783 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:50.100 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:50.100 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:26:50.100 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:50.100 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODcyYWZhYmNhODMxMWY1MGVjOTA1Zjg2MWQwOTkxYzMzMTg5ZjkxMjdlM2E3ZmU20qFfug==:
00:26:50.100 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==:
00:26:50.100 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:50.100 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:26:50.100 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODcyYWZhYmNhODMxMWY1MGVjOTA1Zjg2MWQwOTkxYzMzMTg5ZjkxMjdlM2E3ZmU20qFfug==:
00:26:50.100 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: ]]
00:26:50.100 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==:
00:26:50.100 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1
00:26:50.100 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:50.100 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:50.100 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:26:50.100 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:50.100 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:50.100 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:26:50.100 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:50.100 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:50.100 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:50.100 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:50.100 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:50.100 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:50.100 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:50.100 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:50.100 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:50.100 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:50.100 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:50.100 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:50.100 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:50.100 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:50.100 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:50.100 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:50.100 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:50.668 nvme0n1
00:26:50.668 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:50.668 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:50.668 10:53:57
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.668 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.668 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.668 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.668 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.668 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.668 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.668 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.668 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.668 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:50.668 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.668 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:50.668 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:50.668 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:50.668 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmNhMTQ2ZTc2YmI1YjMyNzI5YjhkOGViMDY1YTc3YWSfsBYA: 00:26:50.668 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: 00:26:50.668 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:50.668 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:50.668 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmNhMTQ2ZTc2YmI1YjMyNzI5YjhkOGViMDY1YTc3YWSfsBYA: 00:26:50.668 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: ]] 00:26:50.668 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: 00:26:50.668 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:50.668 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.668 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:50.668 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:50.668 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:50.668 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.668 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:50.668 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.668 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.668 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.668 10:53:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.668 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:50.668 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:50.668 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:50.668 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.668 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.668 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:50.668 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.668 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:50.668 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:50.668 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:50.668 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:50.668 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.668 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.236 nvme0n1 00:26:51.236 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.236 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.236 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.236 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.236 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.236 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.236 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.236 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.236 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.236 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.237 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.237 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:51.237 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:26:51.237 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.237 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:51.237 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:51.237 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:51.237 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ODFmYzVhZDZhZTlhNzUxNWRmYWE2MjdjYTU1YzU5YTM3Mzg4YTU3YjI5YTMwZDMy/5PQLQ==: 00:26:51.237 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTliOWExMmY4NWZhNTM1MzY3NDhmMzU3NGFhMDc5YzBzPRFT: 00:26:51.237 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:51.237 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:51.237 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODFmYzVhZDZhZTlhNzUxNWRmYWE2MjdjYTU1YzU5YTM3Mzg4YTU3YjI5YTMwZDMy/5PQLQ==: 00:26:51.237 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTliOWExMmY4NWZhNTM1MzY3NDhmMzU3NGFhMDc5YzBzPRFT: ]] 00:26:51.237 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTliOWExMmY4NWZhNTM1MzY3NDhmMzU3NGFhMDc5YzBzPRFT: 00:26:51.237 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:51.237 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.237 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:51.237 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:51.237 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:51.237 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.237 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:51.237 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.237 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.237 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.237 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.237 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:51.237 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:51.237 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:51.237 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.237 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.237 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:51.237 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.237 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:51.237 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:51.237 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:51.237 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:51.237 10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.237 
10:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.805 nvme0n1 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWMwODY1MTc4ZTRiMDNjOTM0NDYyOWY1ZWRmMWM5NjgxZWQ2NmE1N2JkMzhmNDkyY2VhMjA2NTkzNWU1ZjkwYrriY8A=: 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWMwODY1MTc4ZTRiMDNjOTM0NDYyOWY1ZWRmMWM5NjgxZWQ2NmE1N2JkMzhmNDkyY2VhMjA2NTkzNWU1ZjkwYrriY8A=: 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.805 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.371 nvme0n1 00:26:52.371 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.371 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.371 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.371 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.371 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.371 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.630 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.630 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.630 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.630 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.630 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.630 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:52.630 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.630 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:52.630 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:52.630 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:26:52.630 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODcyYWZhYmNhODMxMWY1MGVjOTA1Zjg2MWQwOTkxYzMzMTg5ZjkxMjdlM2E3ZmU20qFfug==: 00:26:52.630 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: 00:26:52.630 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:52.630 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:52.630 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODcyYWZhYmNhODMxMWY1MGVjOTA1Zjg2MWQwOTkxYzMzMTg5ZjkxMjdlM2E3ZmU20qFfug==: 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: ]] 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.631 request: 00:26:52.631 { 00:26:52.631 "name": "nvme0", 00:26:52.631 "trtype": "tcp", 00:26:52.631 "traddr": "10.0.0.1", 00:26:52.631 "adrfam": "ipv4", 00:26:52.631 "trsvcid": "4420", 00:26:52.631 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:52.631 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:52.631 "prchk_reftag": false, 00:26:52.631 "prchk_guard": false, 00:26:52.631 "hdgst": false, 00:26:52.631 "ddgst": false, 00:26:52.631 "allow_unrecognized_csi": false, 00:26:52.631 "method": "bdev_nvme_attach_controller", 00:26:52.631 "req_id": 1 00:26:52.631 } 00:26:52.631 Got JSON-RPC error response 00:26:52.631 response: 00:26:52.631 { 00:26:52.631 "code": -5, 00:26:52.631 "message": "Input/output error" 00:26:52.631 } 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
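Note: the attach attempt traced above was issued without any --dhchap-key against a subsystem whose host entry requires DH-HMAC-CHAP, so the JSON-RPC error -5 (Input/output error) is the expected outcome; the NOT wrapper from autotest_common.sh inverts the exit status so the expected failure counts as a pass, and get_main_ns_ip then re-resolves the initiator address (10.0.0.1) for the next attempt. A minimal sketch of the same negative check, assuming a direct scripts/rpc.py invocation with the arguments logged above rather than the harness's rpc_cmd wrapper:

    # expect failure: no DH-CHAP key offered for a subsystem that requires auth
    if scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
        echo "attach without a DH-CHAP key unexpectedly succeeded" >&2
        exit 1
    fi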
00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.631 10:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.631 request: 00:26:52.631 { 00:26:52.631 "name": "nvme0", 00:26:52.631 "trtype": "tcp", 00:26:52.631 "traddr": "10.0.0.1", 00:26:52.631 "adrfam": "ipv4", 00:26:52.631 "trsvcid": "4420", 00:26:52.631 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:52.631 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:52.631 "prchk_reftag": false, 00:26:52.631 "prchk_guard": false, 00:26:52.631 "hdgst": false, 00:26:52.631 "ddgst": false, 00:26:52.631 "dhchap_key": "key2", 00:26:52.631 "allow_unrecognized_csi": false, 00:26:52.631 "method": "bdev_nvme_attach_controller", 00:26:52.631 "req_id": 1 00:26:52.631 } 00:26:52.631 Got JSON-RPC error response 00:26:52.631 response: 00:26:52.631 { 00:26:52.631 "code": -5, 00:26:52.631 "message": "Input/output error" 00:26:52.631 } 00:26:52.631 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:52.631 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:52.631 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:52.631 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:52.631 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:52.631 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.631 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:52.631 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.631 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.631 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
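Note: this second rejected attach covers the key-mismatch case; the host offered key2 while the target's host entry was provisioned for keyid 1 (nvmet_auth_set_key sha256 ffdhe2048 1 earlier in the run), so authentication fails with the same -5 error, and the jq length check above confirms the failed attach left no controller registered. A sketch of that post-failure assertion, again assuming a direct scripts/rpc.py call in place of rpc_cmd:

    # after each expected failure, assert no controller was left registered
    [[ "$(scripts/rpc.py bdev_nvme_get_controllers | jq length)" -eq 0 ]]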
00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.891 request: 00:26:52.891 { 00:26:52.891 "name": "nvme0", 00:26:52.891 "trtype": "tcp", 00:26:52.891 "traddr": "10.0.0.1", 00:26:52.891 "adrfam": "ipv4", 00:26:52.891 "trsvcid": "4420", 00:26:52.891 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:52.891 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:52.891 "prchk_reftag": false, 00:26:52.891 "prchk_guard": false, 00:26:52.891 "hdgst": false, 00:26:52.891 "ddgst": false, 00:26:52.891 "dhchap_key": "key1", 00:26:52.891 "dhchap_ctrlr_key": "ckey2", 00:26:52.891 "allow_unrecognized_csi": false, 00:26:52.891 "method": "bdev_nvme_attach_controller", 00:26:52.891 "req_id": 1 00:26:52.891 } 00:26:52.891 Got JSON-RPC error response 00:26:52.891 response: 00:26:52.891 { 00:26:52.891 "code": -5, 00:26:52.891 "message": "Input/output 
error" 00:26:52.891 } 00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.891 nvme0n1 00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:52.891 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:52.892 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:52.892 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmNhMTQ2ZTc2YmI1YjMyNzI5YjhkOGViMDY1YTc3YWSfsBYA: 00:26:52.892 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: 00:26:52.892 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:52.892 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:52.892 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmNhMTQ2ZTc2YmI1YjMyNzI5YjhkOGViMDY1YTc3YWSfsBYA: 00:26:52.892 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: ]] 00:26:52.892 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: 00:26:52.892 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:52.892 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.892 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.151 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.151 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.151 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:26:53.151 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.151 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.151 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.151 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.151 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:53.151 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:53.151 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:53.151 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:53.151 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:53.151 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:53.151 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:53.151 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:53.151 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.151 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.151 request: 00:26:53.151 { 00:26:53.151 "name": "nvme0", 00:26:53.151 "dhchap_key": "key1", 00:26:53.151 "dhchap_ctrlr_key": "ckey2", 00:26:53.151 "method": "bdev_nvme_set_keys", 00:26:53.151 "req_id": 1 00:26:53.151 } 00:26:53.151 Got JSON-RPC error response 00:26:53.151 response: 00:26:53.151 { 00:26:53.151 "code": -13, 00:26:53.151 "message": "Permission denied" 00:26:53.151 } 00:26:53.151 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:53.151 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:53.151 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:53.151 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:53.151 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:26:53.151 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.151 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:53.151 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.151 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.151 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.151 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:53.151 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:54.545 10:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.545 10:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:54.545 10:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.545 10:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.545 10:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.545 10:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:54.545 10:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODcyYWZhYmNhODMxMWY1MGVjOTA1Zjg2MWQwOTkxYzMzMTg5ZjkxMjdlM2E3ZmU20qFfug==: 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODcyYWZhYmNhODMxMWY1MGVjOTA1Zjg2MWQwOTkxYzMzMTg5ZjkxMjdlM2E3ZmU20qFfug==: 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: ]] 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NzA2NDA0YThhZTEwNzU0MzBlZGJmMWNjNzY0NGM0NzkyMTYyM2ZiNTcxNDcxZWZk6ohebA==: 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.483 nvme0n1 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmNhMTQ2ZTc2YmI1YjMyNzI5YjhkOGViMDY1YTc3YWSfsBYA: 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmNhMTQ2ZTc2YmI1YjMyNzI5YjhkOGViMDY1YTc3YWSfsBYA: 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: ]] 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJlZmQ2MGJiMGU0NjgxZDNjOGQ2MTNiZDUyZTBmNmUpHEz5: 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.483 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.483 request: 00:26:55.483 { 00:26:55.483 "name": "nvme0", 00:26:55.483 "dhchap_key": "key2", 00:26:55.483 "dhchap_ctrlr_key": "ckey1", 00:26:55.483 "method": "bdev_nvme_set_keys", 00:26:55.483 "req_id": 1 00:26:55.483 } 00:26:55.483 Got JSON-RPC error response 00:26:55.483 response: 00:26:55.483 { 00:26:55.483 "code": -13, 00:26:55.483 "message": "Permission denied" 00:26:55.483 } 00:26:55.484 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:55.484 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:55.484 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:55.484 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:55.484 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:55.484 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.484 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:55.484 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.484 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.484 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.484 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:26:55.484 10:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:26:56.861 10:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.861 10:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:56.861 10:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.861 10:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.861 10:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.861 10:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:26:56.861 10:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:26:56.861 10:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:26:56.861 10:54:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:56.861 10:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:56.861 10:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:26:56.861 10:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:56.861 10:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:26:56.861 10:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:56.861 10:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:56.861 rmmod nvme_tcp 00:26:56.861 rmmod nvme_fabrics 00:26:56.861 10:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:56.861 10:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:26:56.861 10:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:26:56.861 10:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 1822010 ']' 00:26:56.861 10:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 1822010 00:26:56.861 10:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 1822010 ']' 00:26:56.861 10:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 1822010 00:26:56.861 10:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:26:56.861 10:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:56.861 10:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1822010 00:26:56.861 10:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:56.861 10:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:56.861 10:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1822010' 00:26:56.861 killing process with pid 1822010 00:26:56.861 10:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 1822010 00:26:56.861 10:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 1822010 00:26:56.861 10:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:56.861 10:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:56.861 10:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:56.861 10:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:26:56.861 10:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:26:56.861 10:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:56.861 10:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:26:56.861 10:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:56.861 10:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:56.861 10:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:56.861 10:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:26:56.861 10:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:59.398 10:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:59.398 10:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:59.398 10:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:59.398 10:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:59.398 10:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:59.398 10:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:26:59.398 10:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:59.398 10:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:59.398 10:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:59.398 10:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:59.398 10:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:59.398 10:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:59.398 10:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:01.937 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:01.937 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:01.937 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:01.937 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:01.937 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:01.937 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:01.937 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:01.937 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:01.937 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:01.937 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:01.937 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:01.937 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:01.937 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:01.937 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:01.937 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:01.937 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:02.875 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:27:02.875 10:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.c5q /tmp/spdk.key-null.2mh /tmp/spdk.key-sha256.HiQ /tmp/spdk.key-sha384.uwV /tmp/spdk.key-sha512.k6N /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:02.875 10:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:06.163 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:27:06.163 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:06.163 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 
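clean_kernel_target above tears the kernel nvmet configfs tree down strictly in reverse creation order: unlink the allowed host, disable the namespace, remove the port-to-subsystem symlink, then rmdir namespace, port, and subsystem before unloading nvmet_tcp/nvmet. Condensed from the traced commands (NQN and paths as in this run; the bare 'echo 0' is assumed to be the namespace-disable write, since its target is not shown in the trace):

    sub=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    echo 0 > "$sub/namespaces/1/enable"    # assumed target of the traced 'echo 0'
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
    rmdir "$sub/namespaces/1" /sys/kernel/config/nvmet/ports/1 "$sub"
    modprobe -r nvmet_tcp nvmet

rmdir works here because configfs directories are virtual objects that can only disappear once nothing references them, which is why the symlink under ports/1/subsystems has to go first.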
00:27:06.163 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:27:06.163 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:27:06.163 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:27:06.163 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:27:06.163 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:27:06.163 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:27:06.163 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:27:06.163 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:27:06.163 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:27:06.163 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:27:06.163 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:27:06.163 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:27:06.163 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:27:06.163 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:27:06.163 00:27:06.163 real 0m54.216s 00:27:06.163 user 0m49.044s 00:27:06.163 sys 0m12.608s 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.163 ************************************ 00:27:06.163 END TEST nvmf_auth_host 00:27:06.163 ************************************ 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.163 ************************************ 00:27:06.163 START TEST nvmf_digest 00:27:06.163 ************************************ 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:06.163 * Looking for test storage... 
00:27:06.163 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:06.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.163 --rc genhtml_branch_coverage=1 00:27:06.163 --rc genhtml_function_coverage=1 00:27:06.163 --rc genhtml_legend=1 00:27:06.163 --rc geninfo_all_blocks=1 00:27:06.163 --rc geninfo_unexecuted_blocks=1 00:27:06.163 00:27:06.163 ' 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:06.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.163 --rc genhtml_branch_coverage=1 00:27:06.163 --rc genhtml_function_coverage=1 00:27:06.163 --rc genhtml_legend=1 00:27:06.163 --rc geninfo_all_blocks=1 00:27:06.163 --rc geninfo_unexecuted_blocks=1 00:27:06.163 00:27:06.163 ' 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:06.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.163 --rc genhtml_branch_coverage=1 00:27:06.163 --rc genhtml_function_coverage=1 00:27:06.163 --rc genhtml_legend=1 00:27:06.163 --rc geninfo_all_blocks=1 00:27:06.163 --rc geninfo_unexecuted_blocks=1 00:27:06.163 00:27:06.163 ' 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:06.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.163 --rc genhtml_branch_coverage=1 00:27:06.163 --rc genhtml_function_coverage=1 00:27:06.163 --rc genhtml_legend=1 00:27:06.163 --rc geninfo_all_blocks=1 00:27:06.163 --rc geninfo_unexecuted_blocks=1 00:27:06.163 00:27:06.163 ' 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:06.163 
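The lt 1.15 2 exchange above is scripts/common.sh deciding, from lcov --version, that the installed lcov predates 2.x and therefore needs the legacy --rc lcov_branch_coverage option spelling exported just before digest.sh sources the nvmf common helpers. The comparator splits both versions on any of . - :, walks the components numerically, and treats missing components as zero. A standalone sketch of that logic (an illustrative rewrite, not the exact scripts/common.sh source):

    # lt A B: exit 0 (success) if version A sorts strictly below version B.
    lt() {
        local IFS=.-:
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal versions are not less-than
    }
    lt 1.15 2 && echo "pre-2.0 lcov option spelling needed"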
10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:06.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:06.163 10:54:13 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:27:06.163 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:12.728 
10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:12.728 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:12.728 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:12.728 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:12.728 Found net devices under 0000:86:00.0: cvl_0_0 
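Device discovery above works purely from sysfs: the harness keeps a whitelist of Intel E810/X722 and Mellanox device IDs, matches them against the PCI bus (both 0000:86:00.0 and 0000:86:00.1 report 0x8086:0x159b, an E810 port bound to the ice driver), and resolves each matched function to its renamed netdev through the per-device net/ directory. The equivalent one-liner, with the PCI address taken from the log (the same lookup repeats for 0000:86:00.1 just below):

    ls /sys/bus/pci/devices/0000:86:00.0/net/    # -> cvl_0_0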
00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:12.729 Found net devices under 0000:86:00.1: cvl_0_1 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:12.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:12.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.427 ms 00:27:12.729 00:27:12.729 --- 10.0.0.2 ping statistics --- 00:27:12.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:12.729 rtt min/avg/max/mdev = 0.427/0.427/0.427/0.000 ms 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:12.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:12.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:27:12.729 00:27:12.729 --- 10.0.0.1 ping statistics --- 00:27:12.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:12.729 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:12.729 ************************************ 00:27:12.729 START TEST nvmf_digest_clean 00:27:12.729 ************************************ 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=1836287 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 1836287 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1836287 ']' 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:12.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:12.729 [2024-11-19 10:54:19.387131] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:27:12.729 [2024-11-19 10:54:19.387180] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:12.729 [2024-11-19 10:54:19.468054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.729 [2024-11-19 10:54:19.507606] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:12.729 [2024-11-19 10:54:19.507641] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:12.729 [2024-11-19 10:54:19.507648] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:12.729 [2024-11-19 10:54:19.507653] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:12.729 [2024-11-19 10:54:19.507658] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
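nvmfappstart above is where the long-lived target for the digest suite comes up: nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace (so it serves 10.0.0.2 on cvl_0_0), with every trace group enabled and --wait-for-rpc, which is why the reactor starts just below but the TCP transport, the null0 bdev, and the 10.0.0.2:4420 listener only appear once common_target_config pushes the batched RPCs. Stripped of the harness wrappers, the launch traced at nvmf/common.sh@508 is:

    # Paths shortened to the repo root; flags exactly as traced.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    # waitforlisten then polls /var/tmp/spdk.sock before any RPC is sent.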
00:27:12.729 [2024-11-19 10:54:19.508250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:12.729 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:12.730 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:12.730 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:12.730 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:12.730 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:12.730 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:12.730 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:27:12.730 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:27:12.730 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.730 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:12.730 null0 00:27:12.730 [2024-11-19 10:54:19.668537] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:12.730 [2024-11-19 10:54:19.692746] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:12.730 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.730 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:27:12.730 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:12.730 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:12.730 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:12.730 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:12.730 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:12.730 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:12.730 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1836315 00:27:12.730 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1836315 /var/tmp/bperf.sock 00:27:12.730 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:12.730 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1836315 ']' 00:27:12.730 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:12.730 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:27:12.730 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:12.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:12.730 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:12.730 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:12.730 [2024-11-19 10:54:19.747404] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:27:12.730 [2024-11-19 10:54:19.747448] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1836315 ] 00:27:12.730 [2024-11-19 10:54:19.821869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.730 [2024-11-19 10:54:19.862687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:12.730 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:12.730 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:12.730 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:12.730 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:12.730 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:12.730 10:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:12.730 10:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:13.296 nvme0n1 00:27:13.296 10:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:13.296 10:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:13.296 Running I/O for 2 seconds... 
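Each run_bperf iteration follows the same shape, and the first one plays out above and below: bdevperf is started against its own RPC socket, the controller is attached with data digest enabled (--ddgst), perform_tests drives the two-second workload whose results follow, and the pass/fail signal is not the IOPS figure but the accel framework's crc32c counters. Condensed from the traced commands, with socket path and flags exactly as in this run:

    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'

With scan_dsa=false the expected module is software, and the test asserts executed > 0, i.e. that every data digest actually went through the crc32c path rather than being skipped.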
00:27:15.233 25715.00 IOPS, 100.45 MiB/s [2024-11-19T09:54:22.682Z] 25473.50 IOPS, 99.51 MiB/s 00:27:15.233 Latency(us) 00:27:15.233 [2024-11-19T09:54:22.682Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:15.233 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:15.233 nvme0n1 : 2.00 25491.64 99.58 0.00 0.00 5016.19 2635.69 11169.61 00:27:15.233 [2024-11-19T09:54:22.682Z] =================================================================================================================== 00:27:15.233 [2024-11-19T09:54:22.682Z] Total : 25491.64 99.58 0.00 0.00 5016.19 2635.69 11169.61 00:27:15.492 { 00:27:15.492 "results": [ 00:27:15.492 { 00:27:15.492 "job": "nvme0n1", 00:27:15.492 "core_mask": "0x2", 00:27:15.492 "workload": "randread", 00:27:15.492 "status": "finished", 00:27:15.492 "queue_depth": 128, 00:27:15.492 "io_size": 4096, 00:27:15.492 "runtime": 2.003598, 00:27:15.492 "iops": 25491.64053867093, 00:27:15.492 "mibps": 99.57672085418332, 00:27:15.492 "io_failed": 0, 00:27:15.492 "io_timeout": 0, 00:27:15.492 "avg_latency_us": 5016.185886705399, 00:27:15.492 "min_latency_us": 2635.686956521739, 00:27:15.492 "max_latency_us": 11169.613913043479 00:27:15.492 } 00:27:15.492 ], 00:27:15.492 "core_count": 1 00:27:15.492 } 00:27:15.492 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:15.492 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:15.492 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:15.492 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:15.492 | select(.opcode=="crc32c") 00:27:15.492 | "\(.module_name) \(.executed)"' 00:27:15.492 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:15.492 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:15.492 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:15.492 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:15.492 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:15.492 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1836315 00:27:15.492 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1836315 ']' 00:27:15.492 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1836315 00:27:15.492 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:15.492 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:15.492 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1836315 00:27:15.751 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:15.751 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:27:15.751 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1836315' 00:27:15.751 killing process with pid 1836315 00:27:15.751 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1836315 00:27:15.751 Received shutdown signal, test time was about 2.000000 seconds 00:27:15.751 00:27:15.751 Latency(us) 00:27:15.751 [2024-11-19T09:54:23.200Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:15.751 [2024-11-19T09:54:23.200Z] =================================================================================================================== 00:27:15.751 [2024-11-19T09:54:23.200Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:15.751 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1836315 00:27:15.751 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:27:15.751 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:15.751 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:15.751 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:15.751 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:15.751 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:15.751 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:15.751 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1836789 00:27:15.751 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1836789 /var/tmp/bperf.sock 00:27:15.751 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:15.751 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1836789 ']' 00:27:15.751 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:15.751 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:15.751 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:15.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:15.751 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:15.751 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:15.751 [2024-11-19 10:54:23.160201] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:27:15.751 [2024-11-19 10:54:23.160251] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1836789 ] 00:27:15.751 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:15.751 Zero copy mechanism will not be used. 00:27:16.010 [2024-11-19 10:54:23.228936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:16.010 [2024-11-19 10:54:23.267531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:16.010 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:16.010 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:16.010 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:16.010 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:16.010 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:16.268 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:16.268 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:16.837 nvme0n1 00:27:16.837 10:54:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:16.837 10:54:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:16.837 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:16.837 Zero copy mechanism will not be used. 00:27:16.837 Running I/O for 2 seconds... 
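The second pass swaps in 128 KiB reads at queue depth 16, and bdevperf immediately notes that 131072 exceeds its 65536-byte zero-copy threshold, so the TCP socket path falls back to copying payloads instead of zero-copy sends. Relative to the first run only the bdevperf knobs change (traced above):

    -w randread -o 131072 -t 2 -q 16

The lower IOPS in the results below (about 5.9K versus 25.5K) is the expected consequence of the 32x larger I/O size, while throughput actually rises (roughly 737 MiB/s versus 100 MiB/s) because each I/O carries 32x the data.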
00:27:18.710 5853.00 IOPS, 731.62 MiB/s [2024-11-19T09:54:26.159Z] 5900.50 IOPS, 737.56 MiB/s 00:27:18.710 Latency(us) 00:27:18.710 [2024-11-19T09:54:26.159Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:18.710 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:18.710 nvme0n1 : 2.00 5898.50 737.31 0.00 0.00 2709.77 609.06 5043.42 00:27:18.710 [2024-11-19T09:54:26.159Z] =================================================================================================================== 00:27:18.710 [2024-11-19T09:54:26.159Z] Total : 5898.50 737.31 0.00 0.00 2709.77 609.06 5043.42 00:27:18.710 { 00:27:18.710 "results": [ 00:27:18.710 { 00:27:18.710 "job": "nvme0n1", 00:27:18.710 "core_mask": "0x2", 00:27:18.710 "workload": "randread", 00:27:18.710 "status": "finished", 00:27:18.710 "queue_depth": 16, 00:27:18.710 "io_size": 131072, 00:27:18.710 "runtime": 2.003389, 00:27:18.710 "iops": 5898.504983305788, 00:27:18.710 "mibps": 737.3131229132235, 00:27:18.710 "io_failed": 0, 00:27:18.710 "io_timeout": 0, 00:27:18.710 "avg_latency_us": 2709.7712509980097, 00:27:18.710 "min_latency_us": 609.0573913043478, 00:27:18.710 "max_latency_us": 5043.422608695652 00:27:18.710 } 00:27:18.710 ], 00:27:18.710 "core_count": 1 00:27:18.710 } 00:27:18.710 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:18.710 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:18.970 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:18.970 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:18.970 | select(.opcode=="crc32c") 00:27:18.970 | "\(.module_name) \(.executed)"' 00:27:18.970 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:18.970 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:18.970 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:18.970 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:18.970 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:18.970 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1836789 00:27:18.970 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1836789 ']' 00:27:18.970 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1836789 00:27:18.970 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:18.970 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:18.970 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1836789 00:27:19.229 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:19.229 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:27:19.229 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1836789' 00:27:19.229 killing process with pid 1836789 00:27:19.229 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1836789 00:27:19.229 Received shutdown signal, test time was about 2.000000 seconds 00:27:19.229 00:27:19.229 Latency(us) 00:27:19.229 [2024-11-19T09:54:26.678Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:19.229 [2024-11-19T09:54:26.678Z] =================================================================================================================== 00:27:19.229 [2024-11-19T09:54:26.678Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:19.229 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1836789 00:27:19.229 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:27:19.229 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:19.229 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:19.229 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:19.229 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:19.229 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:19.229 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:19.229 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1837477 00:27:19.229 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1837477 /var/tmp/bperf.sock 00:27:19.229 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:19.229 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1837477 ']' 00:27:19.230 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:19.230 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:19.230 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:19.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:19.230 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:19.230 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:19.230 [2024-11-19 10:54:26.630340] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:27:19.230 [2024-11-19 10:54:26.630390] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1837477 ]
00:27:19.489 [2024-11-19 10:54:26.705108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:19.489 [2024-11-19 10:54:26.747553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:19.489 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:19.489 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0
00:27:19.489 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:27:19.489 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:27:19.489 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:27:19.748 10:54:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:19.748 10:54:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:20.007 nvme0n1
00:27:20.007 10:54:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:27:20.007 10:54:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:20.267 Running I/O for 2 seconds...
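The three RPCs traced above are the core of every run in this suite: framework_start_init releases the bdevperf app that was launched with --wait-for-rpc, bdev_nvme_attach_controller --ddgst creates the nvme0 bdev over NVMe/TCP with data digest enabled, and bdevperf.py perform_tests starts the workload configured on the bdevperf command line. Condensed into a sketch, with paths written relative to the SPDK tree:

    ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests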
00:27:22.141 26701.00 IOPS, 104.30 MiB/s [2024-11-19T09:54:29.590Z] 26754.50 IOPS, 104.51 MiB/s
00:27:22.141 Latency(us)
00:27:22.141 [2024-11-19T09:54:29.590Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:22.141 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:27:22.141 nvme0n1 : 2.01 26755.34 104.51 0.00 0.00 4775.43 3547.49 9402.99
00:27:22.141 [2024-11-19T09:54:29.590Z] ===================================================================================================================
00:27:22.141 [2024-11-19T09:54:29.590Z] Total : 26755.34 104.51 0.00 0.00 4775.43 3547.49 9402.99
00:27:22.141 {
00:27:22.141   "results": [
00:27:22.141     {
00:27:22.141       "job": "nvme0n1",
00:27:22.141       "core_mask": "0x2",
00:27:22.141       "workload": "randwrite",
00:27:22.141       "status": "finished",
00:27:22.141       "queue_depth": 128,
00:27:22.141       "io_size": 4096,
00:27:22.141       "runtime": 2.005917,
00:27:22.141       "iops": 26755.344313847483,
00:27:22.141       "mibps": 104.51306372596673,
00:27:22.141       "io_failed": 0,
00:27:22.141       "io_timeout": 0,
00:27:22.141       "avg_latency_us": 4775.432283586914,
00:27:22.141       "min_latency_us": 3547.4921739130436,
00:27:22.141       "max_latency_us": 9402.991304347826
00:27:22.141     }
00:27:22.141   ],
00:27:22.141   "core_count": 1
00:27:22.141 }
00:27:22.141 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:27:22.141 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:27:22.141 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:27:22.141 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:27:22.141 | select(.opcode=="crc32c")
00:27:22.141 | "\(.module_name) \(.executed)"'
00:27:22.141 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:27:22.400 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:27:22.400 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:27:22.400 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:27:22.400 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:27:22.400 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1837477
00:27:22.400 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1837477 ']'
00:27:22.400 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1837477
00:27:22.400 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:27:22.400 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:22.400 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1837477
00:27:22.400 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:27:22.400 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:27:22.400 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1837477'
00:27:22.400 killing process with pid 1837477
00:27:22.400 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1837477
00:27:22.400 Received shutdown signal, test time was about 2.000000 seconds
00:27:22.400
00:27:22.400 Latency(us)
00:27:22.400 [2024-11-19T09:54:29.849Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:22.400 [2024-11-19T09:54:29.849Z] ===================================================================================================================
00:27:22.401 [2024-11-19T09:54:29.850Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:22.401 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1837477
00:27:22.660 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:27:22.660 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:27:22.660 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:27:22.660 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:27:22.660 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072
00:27:22.660 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16
00:27:22.660 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:27:22.660 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1837954
00:27:22.660 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1837954 /var/tmp/bperf.sock
00:27:22.660 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:27:22.660 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1837954 ']'
00:27:22.660 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:22.660 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:22.660 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:22.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:22.660 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:22.660 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:27:22.660 [2024-11-19 10:54:29.966244] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
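run_bperf launches a fresh bdevperf instance for every workload/block-size/queue-depth combination; the flags in the trace above carry the whole contract: -m 2 pins it to core 1, -r gives it a private RPC socket, -z keeps it idle, and --wait-for-rpc holds initialization until the digest options are applied. A sketch of the launch-and-wait pattern, assuming the SPDK build-tree layout used in this job (the polling loop is a crude stand-in for the suite's waitforlisten helper):

    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc &
    bperfpid=$!
    # Poll until the UNIX-domain RPC socket accepts connections.
    until ./scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done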
00:27:22.660 [2024-11-19 10:54:29.966292] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1837954 ]
00:27:22.660 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:22.660 Zero copy mechanism will not be used.
00:27:22.660 [2024-11-19 10:54:30.050718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:22.660 [2024-11-19 10:54:30.092374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:23.598 10:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:23.598 10:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0
00:27:23.598 10:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:27:23.598 10:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:27:23.598 10:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:27:23.858 10:54:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:23.858 10:54:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:24.117 nvme0n1
00:27:24.117 10:54:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:27:24.117 10:54:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:24.117 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:24.117 Zero copy mechanism will not be used.
00:27:24.117 Running I/O for 2 seconds...
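The zero-copy notices above are expected for this combination: 131072-byte I/Os exceed the 65536-byte zero-copy threshold, so the socket layer copies payloads instead. The mibps values in each result block are also easy to sanity-check: they are simply iops times io_size, e.g. for the 4096-byte randwrite run earlier:

    # MiB/s = IOPS * io_size (bytes) / 1048576 bytes-per-MiB
    echo 'scale=2; 26755.344313847483 * 4096 / 1048576' | bc    # -> 104.51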
00:27:26.435 6040.00 IOPS, 755.00 MiB/s [2024-11-19T09:54:33.884Z] 6076.50 IOPS, 759.56 MiB/s
00:27:26.435 Latency(us)
00:27:26.435 [2024-11-19T09:54:33.884Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:26.435 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:27:26.435 nvme0n1 : 2.00 6073.61 759.20 0.00 0.00 2629.64 1951.83 8491.19
00:27:26.435 [2024-11-19T09:54:33.884Z] ===================================================================================================================
00:27:26.435 [2024-11-19T09:54:33.884Z] Total : 6073.61 759.20 0.00 0.00 2629.64 1951.83 8491.19
00:27:26.435 {
00:27:26.435   "results": [
00:27:26.435     {
00:27:26.435       "job": "nvme0n1",
00:27:26.435       "core_mask": "0x2",
00:27:26.435       "workload": "randwrite",
00:27:26.435       "status": "finished",
00:27:26.435       "queue_depth": 16,
00:27:26.435       "io_size": 131072,
00:27:26.435       "runtime": 2.003587,
00:27:26.435       "iops": 6073.606985870841,
00:27:26.435       "mibps": 759.2008732338551,
00:27:26.435       "io_failed": 0,
00:27:26.435       "io_timeout": 0,
00:27:26.435       "avg_latency_us": 2629.6370635292096,
00:27:26.435       "min_latency_us": 1951.8330434782608,
00:27:26.435       "max_latency_us": 8491.186086956523
00:27:26.435     }
00:27:26.435   ],
00:27:26.435   "core_count": 1
00:27:26.435 }
00:27:26.435 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:27:26.435 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:27:26.435 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:27:26.435 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:27:26.435 | select(.opcode=="crc32c")
00:27:26.435 | "\(.module_name) \(.executed)"'
00:27:26.435 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:27:26.435 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:27:26.435 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:27:26.435 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:27:26.435 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:27:26.435 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1837954
00:27:26.435 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1837954 ']'
00:27:26.435 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1837954
00:27:26.435 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:27:26.435 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:26.435 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1837954
00:27:26.435 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:27:26.435 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:27:26.435 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1837954'
00:27:26.435 killing process with pid 1837954
00:27:26.435 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1837954
00:27:26.435 Received shutdown signal, test time was about 2.000000 seconds
00:27:26.435
00:27:26.435 Latency(us)
00:27:26.435 [2024-11-19T09:54:33.884Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:26.435 [2024-11-19T09:54:33.884Z] ===================================================================================================================
00:27:26.435 [2024-11-19T09:54:33.884Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:26.435 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1837954
00:27:26.695 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1836287
00:27:26.695 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1836287 ']'
00:27:26.695 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1836287
00:27:26.695 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:27:26.695 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:26.695 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1836287
00:27:26.695 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:27:26.695 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:27:26.695 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1836287'
00:27:26.695 killing process with pid 1836287
00:27:26.695 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1836287
00:27:26.695 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1836287
00:27:26.955
00:27:26.955 real 0m14.865s
00:27:26.955 user 0m28.729s
00:27:26.955 sys 0m4.582s
00:27:26.955 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:26.955 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:27:26.955 ************************************
00:27:26.955 END TEST nvmf_digest_clean
00:27:26.955 ************************************
00:27:26.955 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:27:26.955 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:27:26.955 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:26.955 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:27:26.955 ************************************
00:27:26.955 START TEST nvmf_digest_error
00:27:26.955 ************************************
00:27:26.955 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error
00:27:26.955 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:27:26.955 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:26.955 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:26.955 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:26.955 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=1838676
00:27:26.955 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 1838676
00:27:26.955 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:27:26.955 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1838676 ']'
00:27:26.955 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:26.955 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:26.955 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:26.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:26.955 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:26.955 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:26.955 [2024-11-19 10:54:34.320839] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:27:26.955 [2024-11-19 10:54:34.320882] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:26.955 [2024-11-19 10:54:34.400816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:27.215 [2024-11-19 10:54:34.442425] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:27.215 [2024-11-19 10:54:34.442460] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:27.215 [2024-11-19 10:54:34.442467] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:27.215 [2024-11-19 10:54:34.442474] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:27.215 [2024-11-19 10:54:34.442479] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
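killprocess (common/autotest_common.sh) is traced twice above, once for the bdevperf instance and once for the nvmf target. Reconstructed from the trace, its shape is roughly the sketch below; the real helper has more branches (the @964 test routes sudo-wrapped processes down a different kill path):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1            # @954: refuse an empty pid
        kill -0 "$pid" || return 1           # @958: only signal a live process
        local process_name
        if [ "$(uname)" = Linux ]; then      # @959
            process_name=$(ps --no-headers -o comm= "$pid")   # @960
        fi
        [ "$process_name" = sudo ] && return 1   # @964: placeholder for the sudo branch
        echo "killing process with pid $pid"     # @972
        kill "$pid"                          # @973
        wait "$pid"                          # @978: reap and propagate exit status
    }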
00:27:27.215 [2024-11-19 10:54:34.443013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:27:27.215 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:27.215 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:27:27.215 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:27:27.215 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:27.215 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:27.215 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:27.215 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:27:27.215 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:27.215 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:27.215 [2024-11-19 10:54:34.511452] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:27:27.215 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:27.215 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
00:27:27.215 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
00:27:27.215 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:27.215 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:27.215 null0
00:27:27.215 [2024-11-19 10:54:34.607035] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:27.215 [2024-11-19 10:54:34.631224] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:27.215 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:27.215 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:27:27.215 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:27.215 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:27:27.215 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:27:27.215 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:27:27.215 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1838697
00:27:27.215 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1838697 /var/tmp/bperf.sock
00:27:27.215 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:27:27.215 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1838697 ']'
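The target side of the error test is configured entirely before framework init: accel_assign_opc routes every crc32c operation to the accel 'error' module, and common_target_config then creates the null0 bdev, the TCP transport, and the 10.0.0.2:4420 listener seen in the notices above. The suite batches these through rpc_cmd; issued one at a time against the target socket they would look roughly like this (the null-bdev size and block-size arguments here are assumptions, not taken from this log):

    ./scripts/rpc.py accel_assign_opc -o crc32c -m error
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py bdev_null_create null0 100 4096    # size_mb/block_size assumed
    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420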
00:27:27.215 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:27.215 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:27.215 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:27.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:27.215 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:27.215 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:27.475 [2024-11-19 10:54:34.683174] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:27:27.475 [2024-11-19 10:54:34.683217] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1838697 ]
00:27:27.475 [2024-11-19 10:54:34.743458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:27.475 [2024-11-19 10:54:34.786690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:27.475 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:27.475 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:27:27.475 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:27.475 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:27.734 10:54:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:27.734 10:54:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:27.734 10:54:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:27.734 10:54:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:27.734 10:54:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:27.734 10:54:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:27.993 nvme0n1
00:27:27.993 10:54:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:27:27.993 10:54:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:27.993 10:54:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
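Host-side order matters here, and the trace preserves it: error statistics and unlimited bdev-layer retries are enabled first, crc32c corruption is disabled while the controller connects (so the attach itself succeeds), and only after nvme0n1 appears is corruption armed, producing the data-digest errors and TRANSIENT TRANSPORT ERROR completions that fill the run below. A sketch mirroring the traced RPCs (bperf socket for the host side, default socket for the target; -i 256 is taken verbatim from the trace):

    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable      # target: keep digests clean for attach
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256   # target: now corrupt crc32c results
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests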
00:27:27.993 10:54:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.993 10:54:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:27.993 10:54:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:27.993 Running I/O for 2 seconds... 00:27:27.993 [2024-11-19 10:54:35.423423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:27.993 [2024-11-19 10:54:35.423457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:7516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.993 [2024-11-19 10:54:35.423468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.993 [2024-11-19 10:54:35.435355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:27.993 [2024-11-19 10:54:35.435379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.993 [2024-11-19 10:54:35.435389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.253 [2024-11-19 10:54:35.447293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.253 [2024-11-19 10:54:35.447315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.253 [2024-11-19 10:54:35.447325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.253 [2024-11-19 10:54:35.455599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.253 [2024-11-19 10:54:35.455621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.253 [2024-11-19 10:54:35.455629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.253 [2024-11-19 10:54:35.468466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.253 [2024-11-19 10:54:35.468489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.253 [2024-11-19 10:54:35.468497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.253 [2024-11-19 10:54:35.479969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.253 [2024-11-19 10:54:35.479990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.253 [2024-11-19 10:54:35.479999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.253 [2024-11-19 10:54:35.488159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.253 [2024-11-19 10:54:35.488180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:17998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.253 [2024-11-19 10:54:35.488189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.253 [2024-11-19 10:54:35.499551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.253 [2024-11-19 10:54:35.499573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.253 [2024-11-19 10:54:35.499581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.253 [2024-11-19 10:54:35.507952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.253 [2024-11-19 10:54:35.507973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.253 [2024-11-19 10:54:35.507981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.253 [2024-11-19 10:54:35.518681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.253 [2024-11-19 10:54:35.518703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.253 [2024-11-19 10:54:35.518715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.253 [2024-11-19 10:54:35.527090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.253 [2024-11-19 10:54:35.527111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.253 [2024-11-19 10:54:35.527120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.253 [2024-11-19 10:54:35.536448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.253 [2024-11-19 10:54:35.536469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.253 [2024-11-19 10:54:35.536478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.253 [2024-11-19 10:54:35.547863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.253 [2024-11-19 10:54:35.547885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.253 [2024-11-19 10:54:35.547894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.253 [2024-11-19 10:54:35.559826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.253 [2024-11-19 10:54:35.559847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.253 [2024-11-19 10:54:35.559857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.253 [2024-11-19 10:54:35.567879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.253 [2024-11-19 10:54:35.567901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.253 [2024-11-19 10:54:35.567909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.253 [2024-11-19 10:54:35.577678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.253 [2024-11-19 10:54:35.577699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.253 [2024-11-19 10:54:35.577707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.253 [2024-11-19 10:54:35.590079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.253 [2024-11-19 10:54:35.590100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.253 [2024-11-19 10:54:35.590109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.253 [2024-11-19 10:54:35.600568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.253 [2024-11-19 10:54:35.600590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.253 [2024-11-19 10:54:35.600599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.253 [2024-11-19 10:54:35.610103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.253 [2024-11-19 10:54:35.610128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:17635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.253 [2024-11-19 10:54:35.610137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.253 [2024-11-19 10:54:35.621193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.253 [2024-11-19 10:54:35.621214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.253 [2024-11-19 10:54:35.621223] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.253 [2024-11-19 10:54:35.631915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.253 [2024-11-19 10:54:35.631937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.254 [2024-11-19 10:54:35.631945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.254 [2024-11-19 10:54:35.641450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.254 [2024-11-19 10:54:35.641471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.254 [2024-11-19 10:54:35.641479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.254 [2024-11-19 10:54:35.650819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.254 [2024-11-19 10:54:35.650840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.254 [2024-11-19 10:54:35.650848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.254 [2024-11-19 10:54:35.660083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.254 [2024-11-19 10:54:35.660103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.254 [2024-11-19 10:54:35.660112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.254 [2024-11-19 10:54:35.669942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.254 [2024-11-19 10:54:35.669969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.254 [2024-11-19 10:54:35.669978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.254 [2024-11-19 10:54:35.678962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.254 [2024-11-19 10:54:35.678982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.254 [2024-11-19 10:54:35.678991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.254 [2024-11-19 10:54:35.688298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.254 [2024-11-19 10:54:35.688320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:14285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.254 
[2024-11-19 10:54:35.688332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.254 [2024-11-19 10:54:35.698028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.254 [2024-11-19 10:54:35.698051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.254 [2024-11-19 10:54:35.698061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.514 [2024-11-19 10:54:35.708945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.514 [2024-11-19 10:54:35.708972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.514 [2024-11-19 10:54:35.708997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.514 [2024-11-19 10:54:35.718641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.514 [2024-11-19 10:54:35.718662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.514 [2024-11-19 10:54:35.718671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.514 [2024-11-19 10:54:35.729098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.514 [2024-11-19 10:54:35.729120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.514 [2024-11-19 10:54:35.729128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.514 [2024-11-19 10:54:35.738448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.514 [2024-11-19 10:54:35.738469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.514 [2024-11-19 10:54:35.738477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.514 [2024-11-19 10:54:35.747795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.514 [2024-11-19 10:54:35.747816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.514 [2024-11-19 10:54:35.747824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.514 [2024-11-19 10:54:35.757186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.514 [2024-11-19 10:54:35.757218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8363 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.514 [2024-11-19 10:54:35.757228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.514 [2024-11-19 10:54:35.766718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.514 [2024-11-19 10:54:35.766740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.514 [2024-11-19 10:54:35.766748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.514 [2024-11-19 10:54:35.776047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.514 [2024-11-19 10:54:35.776075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:19990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.514 [2024-11-19 10:54:35.776084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.514 [2024-11-19 10:54:35.785489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.514 [2024-11-19 10:54:35.785510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:17203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.514 [2024-11-19 10:54:35.785519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.514 [2024-11-19 10:54:35.794215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.514 [2024-11-19 10:54:35.794235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.514 [2024-11-19 10:54:35.794244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.514 [2024-11-19 10:54:35.803697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.514 [2024-11-19 10:54:35.803718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.514 [2024-11-19 10:54:35.803727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.514 [2024-11-19 10:54:35.813046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.514 [2024-11-19 10:54:35.813067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.514 [2024-11-19 10:54:35.813076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.514 [2024-11-19 10:54:35.822989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.514 [2024-11-19 10:54:35.823009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:6 nsid:1 lba:6443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.514 [2024-11-19 10:54:35.823018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.514 [2024-11-19 10:54:35.831623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.515 [2024-11-19 10:54:35.831644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.515 [2024-11-19 10:54:35.831652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.515 [2024-11-19 10:54:35.841118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.515 [2024-11-19 10:54:35.841139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.515 [2024-11-19 10:54:35.841147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.515 [2024-11-19 10:54:35.851319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.515 [2024-11-19 10:54:35.851341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.515 [2024-11-19 10:54:35.851349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.515 [2024-11-19 10:54:35.861032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.515 [2024-11-19 10:54:35.861052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.515 [2024-11-19 10:54:35.861061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.515 [2024-11-19 10:54:35.869792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.515 [2024-11-19 10:54:35.869813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:25420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.515 [2024-11-19 10:54:35.869822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.515 [2024-11-19 10:54:35.879331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.515 [2024-11-19 10:54:35.879352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.515 [2024-11-19 10:54:35.879365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.515 [2024-11-19 10:54:35.889779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.515 [2024-11-19 10:54:35.889801] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:15496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.515 [2024-11-19 10:54:35.889810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.515 [2024-11-19 10:54:35.899268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.515 [2024-11-19 10:54:35.899290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.515 [2024-11-19 10:54:35.899298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.515 [2024-11-19 10:54:35.910673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.515 [2024-11-19 10:54:35.910695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:4744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.515 [2024-11-19 10:54:35.910703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.515 [2024-11-19 10:54:35.919683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.515 [2024-11-19 10:54:35.919704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.515 [2024-11-19 10:54:35.919713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.515 [2024-11-19 10:54:35.931446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.515 [2024-11-19 10:54:35.931467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.515 [2024-11-19 10:54:35.931475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.515 [2024-11-19 10:54:35.943216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.515 [2024-11-19 10:54:35.943237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.515 [2024-11-19 10:54:35.943249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.515 [2024-11-19 10:54:35.952891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.515 [2024-11-19 10:54:35.952912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.515 [2024-11-19 10:54:35.952921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.776 [2024-11-19 10:54:35.964971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.776 
[2024-11-19 10:54:35.964993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.776 [2024-11-19 10:54:35.965001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.776 [2024-11-19 10:54:35.973260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.776 [2024-11-19 10:54:35.973280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.776 [2024-11-19 10:54:35.973289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.776 [2024-11-19 10:54:35.985011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.776 [2024-11-19 10:54:35.985032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:11842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.776 [2024-11-19 10:54:35.985041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.776 [2024-11-19 10:54:35.998352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.776 [2024-11-19 10:54:35.998374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.776 [2024-11-19 10:54:35.998382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.776 [2024-11-19 10:54:36.008290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.776 [2024-11-19 10:54:36.008313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.776 [2024-11-19 10:54:36.008322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.776 [2024-11-19 10:54:36.016200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.776 [2024-11-19 10:54:36.016221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:14452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.776 [2024-11-19 10:54:36.016229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.776 [2024-11-19 10:54:36.026770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.776 [2024-11-19 10:54:36.026791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.776 [2024-11-19 10:54:36.026799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.776 [2024-11-19 10:54:36.037024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x8eb370) 00:27:28.776 [2024-11-19 10:54:36.037048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.776 [2024-11-19 10:54:36.037057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.776 [2024-11-19 10:54:36.045351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.776 [2024-11-19 10:54:36.045372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.776 [2024-11-19 10:54:36.045380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.776 [2024-11-19 10:54:36.056608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.776 [2024-11-19 10:54:36.056629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.776 [2024-11-19 10:54:36.056637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.776 [2024-11-19 10:54:36.067601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.776 [2024-11-19 10:54:36.067622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:21846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.776 [2024-11-19 10:54:36.067631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.776 [2024-11-19 10:54:36.076543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.776 [2024-11-19 10:54:36.076564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.776 [2024-11-19 10:54:36.076573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.776 [2024-11-19 10:54:36.088078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.776 [2024-11-19 10:54:36.088100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.776 [2024-11-19 10:54:36.088108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.776 [2024-11-19 10:54:36.100001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.776 [2024-11-19 10:54:36.100022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.776 [2024-11-19 10:54:36.100031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.776 [2024-11-19 10:54:36.108420] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.776 [2024-11-19 10:54:36.108441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.776 [2024-11-19 10:54:36.108450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.776 [2024-11-19 10:54:36.120511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.776 [2024-11-19 10:54:36.120533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.776 [2024-11-19 10:54:36.120542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.776 [2024-11-19 10:54:36.131718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.776 [2024-11-19 10:54:36.131741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.776 [2024-11-19 10:54:36.131749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.776 [2024-11-19 10:54:36.140077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.776 [2024-11-19 10:54:36.140099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:11423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.776 [2024-11-19 10:54:36.140107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.776 [2024-11-19 10:54:36.151672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.776 [2024-11-19 10:54:36.151694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.776 [2024-11-19 10:54:36.151703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.776 [2024-11-19 10:54:36.162027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.776 [2024-11-19 10:54:36.162049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:5909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.776 [2024-11-19 10:54:36.162057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.776 [2024-11-19 10:54:36.170160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.776 [2024-11-19 10:54:36.170183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.776 [2024-11-19 10:54:36.170191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:27:28.776 [2024-11-19 10:54:36.182265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.776 [2024-11-19 10:54:36.182288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.776 [2024-11-19 10:54:36.182297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.776 [2024-11-19 10:54:36.194510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.776 [2024-11-19 10:54:36.194532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.776 [2024-11-19 10:54:36.194546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.776 [2024-11-19 10:54:36.207264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.777 [2024-11-19 10:54:36.207286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.777 [2024-11-19 10:54:36.207295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.777 [2024-11-19 10:54:36.220622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:28.777 [2024-11-19 10:54:36.220644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.777 [2024-11-19 10:54:36.220657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.148 [2024-11-19 10:54:36.232385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.148 [2024-11-19 10:54:36.232407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.148 [2024-11-19 10:54:36.232415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.148 [2024-11-19 10:54:36.243318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.148 [2024-11-19 10:54:36.243340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.148 [2024-11-19 10:54:36.243349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.148 [2024-11-19 10:54:36.251114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.148 [2024-11-19 10:54:36.251135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.148 [2024-11-19 10:54:36.251143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.148 [2024-11-19 10:54:36.262860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.148 [2024-11-19 10:54:36.262882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:18905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.148 [2024-11-19 10:54:36.262890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.148 [2024-11-19 10:54:36.275729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.148 [2024-11-19 10:54:36.275751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.148 [2024-11-19 10:54:36.275760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.148 [2024-11-19 10:54:36.285521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.148 [2024-11-19 10:54:36.285542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.148 [2024-11-19 10:54:36.285550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.148 [2024-11-19 10:54:36.296047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.148 [2024-11-19 10:54:36.296071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.148 [2024-11-19 10:54:36.296080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.148 [2024-11-19 10:54:36.305913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.148 [2024-11-19 10:54:36.305936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.148 [2024-11-19 10:54:36.305945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.148 [2024-11-19 10:54:36.316591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.148 [2024-11-19 10:54:36.316617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.148 [2024-11-19 10:54:36.316626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.148 [2024-11-19 10:54:36.325281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.148 [2024-11-19 10:54:36.325303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.148 [2024-11-19 10:54:36.325311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.148 [2024-11-19 10:54:36.337281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.148 [2024-11-19 10:54:36.337303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:15184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.148 [2024-11-19 10:54:36.337311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.148 [2024-11-19 10:54:36.347524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.148 [2024-11-19 10:54:36.347546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.148 [2024-11-19 10:54:36.347555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.148 [2024-11-19 10:54:36.358253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.148 [2024-11-19 10:54:36.358275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.148 [2024-11-19 10:54:36.358284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.148 [2024-11-19 10:54:36.366522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.148 [2024-11-19 10:54:36.366545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.148 [2024-11-19 10:54:36.366553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.148 [2024-11-19 10:54:36.377938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.148 [2024-11-19 10:54:36.377966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:18163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.148 [2024-11-19 10:54:36.377975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.148 [2024-11-19 10:54:36.388705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.148 [2024-11-19 10:54:36.388727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.148 [2024-11-19 10:54:36.388736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.148 [2024-11-19 10:54:36.398129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.148 [2024-11-19 10:54:36.398151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.148 [2024-11-19 10:54:36.398160] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.148 [2024-11-19 10:54:36.407659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.148 [2024-11-19 10:54:36.407680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.148 [2024-11-19 10:54:36.407689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.148 24826.00 IOPS, 96.98 MiB/s [2024-11-19T09:54:36.597Z] [2024-11-19 10:54:36.418198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.148 [2024-11-19 10:54:36.418220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:15454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.148 [2024-11-19 10:54:36.418228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.148 [2024-11-19 10:54:36.427552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.148 [2024-11-19 10:54:36.427574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.148 [2024-11-19 10:54:36.427582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.148 [2024-11-19 10:54:36.435817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.148 [2024-11-19 10:54:36.435838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.148 [2024-11-19 10:54:36.435846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.148 [2024-11-19 10:54:36.446072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.148 [2024-11-19 10:54:36.446094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.148 [2024-11-19 10:54:36.446103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.148 [2024-11-19 10:54:36.457500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.148 [2024-11-19 10:54:36.457522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.148 [2024-11-19 10:54:36.457531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.148 [2024-11-19 10:54:36.467483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.148 [2024-11-19 10:54:36.467506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 
nsid:1 lba:8242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.148 [2024-11-19 10:54:36.467514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.148 [2024-11-19 10:54:36.478295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.148 [2024-11-19 10:54:36.478317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:15317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.148 [2024-11-19 10:54:36.478326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.148 [2024-11-19 10:54:36.488243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.148 [2024-11-19 10:54:36.488269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.148 [2024-11-19 10:54:36.488278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.148 [2024-11-19 10:54:36.496845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.148 [2024-11-19 10:54:36.496867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.148 [2024-11-19 10:54:36.496876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.148 [2024-11-19 10:54:36.506708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.148 [2024-11-19 10:54:36.506730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.148 [2024-11-19 10:54:36.506738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.148 [2024-11-19 10:54:36.515386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.148 [2024-11-19 10:54:36.515407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.148 [2024-11-19 10:54:36.515416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.148 [2024-11-19 10:54:36.524525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.148 [2024-11-19 10:54:36.524546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.148 [2024-11-19 10:54:36.524555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.148 [2024-11-19 10:54:36.534041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.148 [2024-11-19 10:54:36.534063] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.148 [2024-11-19 10:54:36.534071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.148 [2024-11-19 10:54:36.544010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.148 [2024-11-19 10:54:36.544031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.148 [2024-11-19 10:54:36.544039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.148 [2024-11-19 10:54:36.553075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.148 [2024-11-19 10:54:36.553097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.148 [2024-11-19 10:54:36.553106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.148 [2024-11-19 10:54:36.561855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.148 [2024-11-19 10:54:36.561877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.148 [2024-11-19 10:54:36.561885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.148 [2024-11-19 10:54:36.573541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.148 [2024-11-19 10:54:36.573563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:15013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.148 [2024-11-19 10:54:36.573572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.148 [2024-11-19 10:54:36.581882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.149 [2024-11-19 10:54:36.581902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.149 [2024-11-19 10:54:36.581910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.149 [2024-11-19 10:54:36.594253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.149 [2024-11-19 10:54:36.594275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.149 [2024-11-19 10:54:36.594283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.406 [2024-11-19 10:54:36.602509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.406 
[2024-11-19 10:54:36.602530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:22351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.406 [2024-11-19 10:54:36.602538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.407 [2024-11-19 10:54:36.613931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.407 [2024-11-19 10:54:36.613958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.407 [2024-11-19 10:54:36.613967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.407 [2024-11-19 10:54:36.625458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.407 [2024-11-19 10:54:36.625479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.407 [2024-11-19 10:54:36.625487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.407 [2024-11-19 10:54:36.635554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.407 [2024-11-19 10:54:36.635576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.407 [2024-11-19 10:54:36.635584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.407 [2024-11-19 10:54:36.644258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.407 [2024-11-19 10:54:36.644279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.407 [2024-11-19 10:54:36.644287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.407 [2024-11-19 10:54:36.656945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.407 [2024-11-19 10:54:36.656972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.407 [2024-11-19 10:54:36.656985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.407 [2024-11-19 10:54:36.666209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.407 [2024-11-19 10:54:36.666230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:2332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.407 [2024-11-19 10:54:36.666239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.407 [2024-11-19 10:54:36.675584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x8eb370) 00:27:29.407 [2024-11-19 10:54:36.675605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.407 [2024-11-19 10:54:36.675614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.407 [2024-11-19 10:54:36.685404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.407 [2024-11-19 10:54:36.685425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.407 [2024-11-19 10:54:36.685433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.407 [2024-11-19 10:54:36.697618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.407 [2024-11-19 10:54:36.697639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.407 [2024-11-19 10:54:36.697648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.407 [2024-11-19 10:54:36.709724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.407 [2024-11-19 10:54:36.709745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:19927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.407 [2024-11-19 10:54:36.709753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.407 [2024-11-19 10:54:36.718845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.407 [2024-11-19 10:54:36.718867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:21988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.407 [2024-11-19 10:54:36.718875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.407 [2024-11-19 10:54:36.730678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.407 [2024-11-19 10:54:36.730700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.407 [2024-11-19 10:54:36.730708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.407 [2024-11-19 10:54:36.739131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.407 [2024-11-19 10:54:36.739152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.407 [2024-11-19 10:54:36.739160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.407 [2024-11-19 10:54:36.752122] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.407 [2024-11-19 10:54:36.752147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:24369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.407 [2024-11-19 10:54:36.752156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.407 [2024-11-19 10:54:36.760398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.407 [2024-11-19 10:54:36.760419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.407 [2024-11-19 10:54:36.760428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.407 [2024-11-19 10:54:36.772231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.407 [2024-11-19 10:54:36.772253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:25064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.407 [2024-11-19 10:54:36.772261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.407 [2024-11-19 10:54:36.783573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.407 [2024-11-19 10:54:36.783594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.407 [2024-11-19 10:54:36.783603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.407 [2024-11-19 10:54:36.791913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.407 [2024-11-19 10:54:36.791935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.407 [2024-11-19 10:54:36.791944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.407 [2024-11-19 10:54:36.804737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.407 [2024-11-19 10:54:36.804759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:14982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.407 [2024-11-19 10:54:36.804768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.407 [2024-11-19 10:54:36.813491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.407 [2024-11-19 10:54:36.813513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:7965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.407 [2024-11-19 10:54:36.813522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:27:29.407 [2024-11-19 10:54:36.824859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.407 [2024-11-19 10:54:36.824880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:4963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.407 [2024-11-19 10:54:36.824888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.407 [2024-11-19 10:54:36.837068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.407 [2024-11-19 10:54:36.837089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:24626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.407 [2024-11-19 10:54:36.837097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.407 [2024-11-19 10:54:36.846759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.407 [2024-11-19 10:54:36.846781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.407 [2024-11-19 10:54:36.846789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.407 [2024-11-19 10:54:36.855399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.407 [2024-11-19 10:54:36.855420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.407 [2024-11-19 10:54:36.855429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.667 [2024-11-19 10:54:36.866910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.667 [2024-11-19 10:54:36.866932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.667 [2024-11-19 10:54:36.866941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.667 [2024-11-19 10:54:36.876124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.667 [2024-11-19 10:54:36.876145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.667 [2024-11-19 10:54:36.876153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.667 [2024-11-19 10:54:36.887515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.667 [2024-11-19 10:54:36.887536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:12240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.667 [2024-11-19 10:54:36.887545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.667 [2024-11-19 10:54:36.898497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.667 [2024-11-19 10:54:36.898519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.667 [2024-11-19 10:54:36.898528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.667 [2024-11-19 10:54:36.907459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.667 [2024-11-19 10:54:36.907481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:8436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.667 [2024-11-19 10:54:36.907489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.667 [2024-11-19 10:54:36.916318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.667 [2024-11-19 10:54:36.916339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.667 [2024-11-19 10:54:36.916348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.667 [2024-11-19 10:54:36.926795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.667 [2024-11-19 10:54:36.926816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.667 [2024-11-19 10:54:36.926829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.667 [2024-11-19 10:54:36.937602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.667 [2024-11-19 10:54:36.937624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.667 [2024-11-19 10:54:36.937632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.667 [2024-11-19 10:54:36.947353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.667 [2024-11-19 10:54:36.947374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.667 [2024-11-19 10:54:36.947382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.667 [2024-11-19 10:54:36.957481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.667 [2024-11-19 10:54:36.957502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.667 [2024-11-19 10:54:36.957511] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.667 [2024-11-19 10:54:36.966297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.667 [2024-11-19 10:54:36.966318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.667 [2024-11-19 10:54:36.966327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.667 [2024-11-19 10:54:36.977550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.667 [2024-11-19 10:54:36.977571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.667 [2024-11-19 10:54:36.977580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.667 [2024-11-19 10:54:36.990703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.667 [2024-11-19 10:54:36.990724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:16585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.667 [2024-11-19 10:54:36.990733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.667 [2024-11-19 10:54:37.003254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.667 [2024-11-19 10:54:37.003276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.667 [2024-11-19 10:54:37.003285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.667 [2024-11-19 10:54:37.014673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.667 [2024-11-19 10:54:37.014695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.667 [2024-11-19 10:54:37.014703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.667 [2024-11-19 10:54:37.024022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.667 [2024-11-19 10:54:37.024044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.667 [2024-11-19 10:54:37.024052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.667 [2024-11-19 10:54:37.033792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370) 00:27:29.667 [2024-11-19 10:54:37.033814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:24579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.667 [2024-11-19 10:54:37.033823] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:29.667 [2024-11-19 10:54:37.045353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8eb370)
00:27:29.667 [2024-11-19 10:54:37.045375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.667 [2024-11-19 10:54:37.045383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line sequence (nvme_tcp.c data digest error, READ command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, all dnr:0) repeats with varying cid/lba for the rest of this 2-second randread run on tqpair 0x8eb370, 10:54:37.054 through 10:54:37.413 ...]
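Each failed READ above leaves the same three-line signature, so the size of the burst can be taken straight from a captured console log as a cross-check against the RPC counter read further down. A throwaway sketch (the pattern is copied from the lines above; the log filename bperf-run.log is a stand-in):

    # Sketch: count completions carrying NVMe generic status 0x22
    # (COMMAND TRANSIENT TRANSPORT ERROR) in a saved bdevperf console log.
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf-run.log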
00:27:30.187 24754.00 IOPS, 96.70 MiB/s
00:27:30.187 Latency(us)
00:27:30.187 [2024-11-19T09:54:37.636Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:27:30.187 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:27:30.187 nvme0n1                      :       2.01    24751.20      96.68       0.00     0.00    5165.60    2664.18   16982.37
00:27:30.187 [2024-11-19T09:54:37.636Z] ===================================================================================================================
00:27:30.187 [2024-11-19T09:54:37.636Z] Total                        :             24751.20      96.68       0.00     0.00    5165.60    2664.18   16982.37
00:27:30.187 {
00:27:30.187   "results": [
00:27:30.187     {
00:27:30.187       "job": "nvme0n1",
00:27:30.187       "core_mask": "0x2",
00:27:30.187       "workload": "randread",
00:27:30.187       "status": "finished",
00:27:30.187       "queue_depth": 128,
00:27:30.187       "io_size": 4096,
00:27:30.187       "runtime": 2.007943,
00:27:30.187       "iops": 24751.200606790135,
00:27:30.187       "mibps": 96.68437737027396,
00:27:30.187       "io_failed": 0,
00:27:30.187       "io_timeout": 0,
00:27:30.187       "avg_latency_us": 5165.603898949939,
00:27:30.187       "min_latency_us": 2664.1808695652176,
00:27:30.187       "max_latency_us": 16982.372173913045
00:27:30.187     }
00:27:30.187   ],
00:27:30.187   "core_count": 1
00:27:30.187 }
00:27:30.187 10:54:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:30.187 10:54:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:30.187 10:54:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
00:27:30.187 10:54:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:30.447 10:54:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 194 > 0 ))
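The (( 194 > 0 )) check above is the core assertion of this test case: with --nvme-error-stat enabled, the NVMe bdev module keeps per-status-code error counters, and digest.sh reads back how many completions carried COMMAND TRANSIENT TRANSPORT ERROR. A minimal standalone sketch of the same readback, using only the commands traced above (it assumes the bdevperf RPC socket /var/tmp/bperf.sock is still listening):

    #!/usr/bin/env bash
    # Sketch: read the transient-transport-error counter for bdev nvme0n1.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    errcount=$("$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # This run counted 194 such completions; any non-zero count means the
    # injected CRC32C corruption surfaced end-to-end as data digest failures.
    (( errcount > 0 )) && echo "transient transport errors: $errcount"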
00:27:30.447 10:54:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1838697
00:27:30.447 10:54:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1838697 ']'
00:27:30.447 10:54:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1838697
00:27:30.447 10:54:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:27:30.447 10:54:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:30.447 10:54:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1838697
00:27:30.447 10:54:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:27:30.447 10:54:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:27:30.447 10:54:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1838697'
00:27:30.447 killing process with pid 1838697
00:27:30.447 10:54:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1838697
00:27:30.447 Received shutdown signal, test time was about 2.000000 seconds
00:27:30.447
00:27:30.447 Latency(us)
00:27:30.447 [2024-11-19T09:54:37.896Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:27:30.447 [2024-11-19T09:54:37.896Z] ===================================================================================================================
00:27:30.447 [2024-11-19T09:54:37.896Z] Total                        :                 0.00       0.00       0.00      0.00       0.00       0.00       0.00
00:27:30.447 10:54:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1838697
00:27:30.447 10:54:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:27:30.447 10:54:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:30.447 10:54:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:27:30.447 10:54:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:27:30.447 10:54:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:27:30.447 10:54:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:27:30.447 10:54:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1839306
00:27:30.447 10:54:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1839306 /var/tmp/bperf.sock
00:27:30.447 10:54:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1839306 ']'
00:27:30.447 10:54:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:30.447 10:54:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:30.447 10:54:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:30.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:30.447 10:54:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:30.447 10:54:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
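bdevperf is launched with -z, so it parks after bringing up its reactor and only runs the workload once perform_tests is issued over the socket; waitforlisten is the harness's polling loop around that startup. A rough standalone equivalent, assuming rpc_get_methods as the readiness probe (the helper in autotest_common.sh polls the socket in much the same way):

    #!/usr/bin/env bash
    # Sketch: start bdevperf paused and wait for its RPC socket to answer.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    for ((i = 0; i < 100; i++)); do     # bounded retries, like max_retries=100
        # rpc_get_methods is assumed as the probe; any cheap RPC would do.
        "$SPDK/scripts/rpc.py" -t 1 -s /var/tmp/bperf.sock rpc_get_methods \
            &>/dev/null && break
        sleep 0.5
    done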
00:27:30.447 [2024-11-19 10:54:37.882954] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:27:30.447 [2024-11-19 10:54:37.883003] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1839306 ]
00:27:30.447 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:30.447 Zero copy mechanism will not be used.
00:27:30.447 [2024-11-19 10:54:37.941055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:30.447 [2024-11-19 10:54:37.985231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:30.706 10:54:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:30.706 10:54:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:27:30.706 10:54:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:30.706 10:54:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:30.965 10:54:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:30.965 10:54:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:30.965 10:54:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:30.965 10:54:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:30.965 10:54:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:30.965 10:54:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:31.223 nvme0n1
00:27:31.482 10:54:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:27:31.482 10:54:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:31.482 10:54:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:31.482 10:54:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:31.482 10:54:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:31.482 10:54:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:31.482 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:31.482 Zero copy mechanism will not be used.
00:27:31.482 Running I/O for 2 seconds...
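With the bperf socket up, the traced RPCs above are the entire arrangement of this error pass: NVMe error statistics on, any stale CRC32C injection cleared, the controller attached with data digest (--ddgst) over TCP, the accel layer armed to corrupt 32 CRC32C operations, and then the timed run kicked off. A sketch of the same sequence issued by hand, with every flag copied from the trace above (only the $RPC shorthand is added here):

    #!/usr/bin/env bash
    # Sketch: RPC setup sequence for the digest-error pass, as traced above.
    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $RPC accel_error_inject_error -o crc32c -t disable      # clear stale injection
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0      # data digest enabled
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32   # corrupt 32 crc32c ops
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests                # start the 2 s run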
00:27:31.482 [2024-11-19 10:54:38.800228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580)
00:27:31.482 [2024-11-19 10:54:38.800263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:31.482 [2024-11-19 10:54:38.800274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the same three-line sequence repeats with varying cid/lba throughout this 2-second 131072-byte (len:32) randread run on tqpair 0x15fc580, 10:54:38.805 through 10:54:39.292 ...]
00:27:32.006 [2024-11-19 10:54:39.297915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580)
00:27:32.006 [2024-11-19 10:54:39.297937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.006 [2024-11-19 10:54:39.297945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:32.006 [2024-11-19 10:54:39.303169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.006 [2024-11-19 10:54:39.303196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.006 [2024-11-19 10:54:39.303216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:32.006 [2024-11-19 10:54:39.308588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.006 [2024-11-19 10:54:39.308610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.006 [2024-11-19 10:54:39.308619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:32.006 [2024-11-19 10:54:39.313899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.006 [2024-11-19 10:54:39.313923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.006 [2024-11-19 10:54:39.313931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:32.006 [2024-11-19 10:54:39.319313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.006 [2024-11-19 10:54:39.319335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.006 [2024-11-19 10:54:39.319343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:32.006 [2024-11-19 10:54:39.324665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.006 [2024-11-19 10:54:39.324689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.006 [2024-11-19 10:54:39.324697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:32.006 [2024-11-19 10:54:39.329943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.006 [2024-11-19 10:54:39.329973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.006 [2024-11-19 10:54:39.329981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:32.006 [2024-11-19 10:54:39.335231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.006 [2024-11-19 10:54:39.335253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 
lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.006 [2024-11-19 10:54:39.335261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:32.006 [2024-11-19 10:54:39.340519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.006 [2024-11-19 10:54:39.340541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.006 [2024-11-19 10:54:39.340549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:32.006 [2024-11-19 10:54:39.345746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.006 [2024-11-19 10:54:39.345769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.006 [2024-11-19 10:54:39.345777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:32.006 [2024-11-19 10:54:39.350990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.006 [2024-11-19 10:54:39.351017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.006 [2024-11-19 10:54:39.351025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:32.006 [2024-11-19 10:54:39.356203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.007 [2024-11-19 10:54:39.356226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.007 [2024-11-19 10:54:39.356235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:32.007 [2024-11-19 10:54:39.361509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.007 [2024-11-19 10:54:39.361531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.007 [2024-11-19 10:54:39.361539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:32.007 [2024-11-19 10:54:39.366806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.007 [2024-11-19 10:54:39.366828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.007 [2024-11-19 10:54:39.366836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:32.007 [2024-11-19 10:54:39.372002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.007 [2024-11-19 10:54:39.372025] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.007 [2024-11-19 10:54:39.372033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:32.007 [2024-11-19 10:54:39.377298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.007 [2024-11-19 10:54:39.377321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.007 [2024-11-19 10:54:39.377330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:32.007 [2024-11-19 10:54:39.382562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.007 [2024-11-19 10:54:39.382584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.007 [2024-11-19 10:54:39.382593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:32.007 [2024-11-19 10:54:39.387783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.007 [2024-11-19 10:54:39.387805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.007 [2024-11-19 10:54:39.387813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:32.007 [2024-11-19 10:54:39.393013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.007 [2024-11-19 10:54:39.393036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.007 [2024-11-19 10:54:39.393047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:32.007 [2024-11-19 10:54:39.398282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.007 [2024-11-19 10:54:39.398305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.007 [2024-11-19 10:54:39.398313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:32.007 [2024-11-19 10:54:39.403519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.007 [2024-11-19 10:54:39.403541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.007 [2024-11-19 10:54:39.403549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:32.007 [2024-11-19 10:54:39.408787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 
00:27:32.007 [2024-11-19 10:54:39.408810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.007 [2024-11-19 10:54:39.408818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:32.007 [2024-11-19 10:54:39.414085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.007 [2024-11-19 10:54:39.414108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.007 [2024-11-19 10:54:39.414116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:32.007 [2024-11-19 10:54:39.419341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.007 [2024-11-19 10:54:39.419365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.007 [2024-11-19 10:54:39.419373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:32.007 [2024-11-19 10:54:39.424651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.007 [2024-11-19 10:54:39.424673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.007 [2024-11-19 10:54:39.424681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:32.007 [2024-11-19 10:54:39.429823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.007 [2024-11-19 10:54:39.429845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.007 [2024-11-19 10:54:39.429853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:32.007 [2024-11-19 10:54:39.435085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.007 [2024-11-19 10:54:39.435106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.007 [2024-11-19 10:54:39.435115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:32.007 [2024-11-19 10:54:39.440356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.007 [2024-11-19 10:54:39.440378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.007 [2024-11-19 10:54:39.440387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:32.007 [2024-11-19 10:54:39.445603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x15fc580) 00:27:32.007 [2024-11-19 10:54:39.445626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.007 [2024-11-19 10:54:39.445634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:32.007 [2024-11-19 10:54:39.450890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.007 [2024-11-19 10:54:39.450913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.007 [2024-11-19 10:54:39.450921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:32.268 [2024-11-19 10:54:39.456170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.268 [2024-11-19 10:54:39.456192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.268 [2024-11-19 10:54:39.456200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:32.268 [2024-11-19 10:54:39.461474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.268 [2024-11-19 10:54:39.461495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.268 [2024-11-19 10:54:39.461503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:32.268 [2024-11-19 10:54:39.466773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.268 [2024-11-19 10:54:39.466794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.268 [2024-11-19 10:54:39.466802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:32.268 [2024-11-19 10:54:39.472088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.268 [2024-11-19 10:54:39.472109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.268 [2024-11-19 10:54:39.472118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:32.268 [2024-11-19 10:54:39.477297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.268 [2024-11-19 10:54:39.477319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.268 [2024-11-19 10:54:39.477327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:32.268 [2024-11-19 10:54:39.482154] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.268 [2024-11-19 10:54:39.482193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.268 [2024-11-19 10:54:39.482216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:32.268 [2024-11-19 10:54:39.487271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.268 [2024-11-19 10:54:39.487293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.268 [2024-11-19 10:54:39.487301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:32.268 [2024-11-19 10:54:39.492279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.268 [2024-11-19 10:54:39.492302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.268 [2024-11-19 10:54:39.492310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:32.268 [2024-11-19 10:54:39.497349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.268 [2024-11-19 10:54:39.497371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.268 [2024-11-19 10:54:39.497379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:32.268 [2024-11-19 10:54:39.502386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.268 [2024-11-19 10:54:39.502409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.268 [2024-11-19 10:54:39.502418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:32.268 [2024-11-19 10:54:39.507491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.268 [2024-11-19 10:54:39.507513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.268 [2024-11-19 10:54:39.507522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:32.268 [2024-11-19 10:54:39.512845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.268 [2024-11-19 10:54:39.512868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.268 [2024-11-19 10:54:39.512876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:27:32.268 [2024-11-19 10:54:39.518297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.268 [2024-11-19 10:54:39.518320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.268 [2024-11-19 10:54:39.518328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:32.268 [2024-11-19 10:54:39.523474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.268 [2024-11-19 10:54:39.523496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.268 [2024-11-19 10:54:39.523504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:32.268 [2024-11-19 10:54:39.528701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.268 [2024-11-19 10:54:39.528727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.268 [2024-11-19 10:54:39.528735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:32.268 [2024-11-19 10:54:39.533936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.268 [2024-11-19 10:54:39.533965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.268 [2024-11-19 10:54:39.533973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:32.268 [2024-11-19 10:54:39.539191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.268 [2024-11-19 10:54:39.539213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.268 [2024-11-19 10:54:39.539221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:32.268 [2024-11-19 10:54:39.544441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.268 [2024-11-19 10:54:39.544463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.268 [2024-11-19 10:54:39.544472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:32.268 [2024-11-19 10:54:39.549681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.268 [2024-11-19 10:54:39.549703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.268 [2024-11-19 10:54:39.549712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:32.268 [2024-11-19 10:54:39.554918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.268 [2024-11-19 10:54:39.554940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.268 [2024-11-19 10:54:39.554954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:32.268 [2024-11-19 10:54:39.560203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.268 [2024-11-19 10:54:39.560224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.268 [2024-11-19 10:54:39.560233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:32.268 [2024-11-19 10:54:39.565466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.268 [2024-11-19 10:54:39.565489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.268 [2024-11-19 10:54:39.565497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:32.268 [2024-11-19 10:54:39.570738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.269 [2024-11-19 10:54:39.570761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.269 [2024-11-19 10:54:39.570769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:32.269 [2024-11-19 10:54:39.576036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.269 [2024-11-19 10:54:39.576059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.269 [2024-11-19 10:54:39.576067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:32.269 [2024-11-19 10:54:39.581330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.269 [2024-11-19 10:54:39.581353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.269 [2024-11-19 10:54:39.581361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:32.269 [2024-11-19 10:54:39.586615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.269 [2024-11-19 10:54:39.586638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.269 [2024-11-19 10:54:39.586647] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:32.269 [2024-11-19 10:54:39.591867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.269 [2024-11-19 10:54:39.591889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.269 [2024-11-19 10:54:39.591897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:32.269 [2024-11-19 10:54:39.597172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.269 [2024-11-19 10:54:39.597194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.269 [2024-11-19 10:54:39.597203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:32.269 [2024-11-19 10:54:39.602398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.269 [2024-11-19 10:54:39.602422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.269 [2024-11-19 10:54:39.602430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:32.269 [2024-11-19 10:54:39.607663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.269 [2024-11-19 10:54:39.607686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.269 [2024-11-19 10:54:39.607693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:32.269 [2024-11-19 10:54:39.612897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.269 [2024-11-19 10:54:39.612919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.269 [2024-11-19 10:54:39.612927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:32.269 [2024-11-19 10:54:39.618139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.269 [2024-11-19 10:54:39.618161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.269 [2024-11-19 10:54:39.618172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:32.269 [2024-11-19 10:54:39.623444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.269 [2024-11-19 10:54:39.623465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.269 [2024-11-19 10:54:39.623474] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:32.269 [2024-11-19 10:54:39.628626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.269 [2024-11-19 10:54:39.628647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.269 [2024-11-19 10:54:39.628655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:32.269 [2024-11-19 10:54:39.633905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.269 [2024-11-19 10:54:39.633927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.269 [2024-11-19 10:54:39.633935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:32.269 [2024-11-19 10:54:39.639207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.269 [2024-11-19 10:54:39.639240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.269 [2024-11-19 10:54:39.639249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:32.269 [2024-11-19 10:54:39.644451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.269 [2024-11-19 10:54:39.644473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.269 [2024-11-19 10:54:39.644481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:32.269 [2024-11-19 10:54:39.649731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.269 [2024-11-19 10:54:39.649753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.269 [2024-11-19 10:54:39.649761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:32.269 [2024-11-19 10:54:39.654979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.269 [2024-11-19 10:54:39.655002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.269 [2024-11-19 10:54:39.655010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:32.269 [2024-11-19 10:54:39.660168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.269 [2024-11-19 10:54:39.660190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:32.269 [2024-11-19 10:54:39.660198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:32.269 [2024-11-19 10:54:39.665415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.269 [2024-11-19 10:54:39.665440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.269 [2024-11-19 10:54:39.665448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:32.269 [2024-11-19 10:54:39.670678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.269 [2024-11-19 10:54:39.670701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.269 [2024-11-19 10:54:39.670709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:32.269 [2024-11-19 10:54:39.675899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.269 [2024-11-19 10:54:39.675923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.269 [2024-11-19 10:54:39.675931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:32.269 [2024-11-19 10:54:39.681132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.269 [2024-11-19 10:54:39.681153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.269 [2024-11-19 10:54:39.681161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:32.269 [2024-11-19 10:54:39.686365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.269 [2024-11-19 10:54:39.686388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.269 [2024-11-19 10:54:39.686396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:32.269 [2024-11-19 10:54:39.691587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.269 [2024-11-19 10:54:39.691609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.269 [2024-11-19 10:54:39.691617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:32.269 [2024-11-19 10:54:39.696887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.269 [2024-11-19 10:54:39.696908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 
lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.269 [2024-11-19 10:54:39.696916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:32.269 [2024-11-19 10:54:39.702063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.269 [2024-11-19 10:54:39.702085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.269 [2024-11-19 10:54:39.702094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:32.269 [2024-11-19 10:54:39.707292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.269 [2024-11-19 10:54:39.707314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.269 [2024-11-19 10:54:39.707322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:32.269 [2024-11-19 10:54:39.712527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.270 [2024-11-19 10:54:39.712549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.270 [2024-11-19 10:54:39.712558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:32.530 [2024-11-19 10:54:39.717788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.530 [2024-11-19 10:54:39.717810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.530 [2024-11-19 10:54:39.717818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:32.530 [2024-11-19 10:54:39.723114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.530 [2024-11-19 10:54:39.723137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.530 [2024-11-19 10:54:39.723146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:32.530 [2024-11-19 10:54:39.728539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.530 [2024-11-19 10:54:39.728561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.530 [2024-11-19 10:54:39.728569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:32.530 [2024-11-19 10:54:39.733783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.530 [2024-11-19 10:54:39.733806] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.530 [2024-11-19 10:54:39.733814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:32.530 [2024-11-19 10:54:39.739045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.530 [2024-11-19 10:54:39.739067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.530 [2024-11-19 10:54:39.739075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:32.530 [2024-11-19 10:54:39.744308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.530 [2024-11-19 10:54:39.744330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.530 [2024-11-19 10:54:39.744338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:32.530 [2024-11-19 10:54:39.749559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.530 [2024-11-19 10:54:39.749580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.530 [2024-11-19 10:54:39.749588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:32.530 [2024-11-19 10:54:39.754795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.530 [2024-11-19 10:54:39.754821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.530 [2024-11-19 10:54:39.754830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:32.530 [2024-11-19 10:54:39.760070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.530 [2024-11-19 10:54:39.760092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.530 [2024-11-19 10:54:39.760100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:32.530 [2024-11-19 10:54:39.765318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.530 [2024-11-19 10:54:39.765340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.530 [2024-11-19 10:54:39.765348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:32.530 [2024-11-19 10:54:39.770612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 
00:27:32.530 [2024-11-19 10:54:39.770634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.530 [2024-11-19 10:54:39.770643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:32.530 [2024-11-19 10:54:39.775829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.530 [2024-11-19 10:54:39.775851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.530 [2024-11-19 10:54:39.775860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:32.530 [2024-11-19 10:54:39.781024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.530 [2024-11-19 10:54:39.781046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.530 [2024-11-19 10:54:39.781055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:32.530 [2024-11-19 10:54:39.786194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.530 [2024-11-19 10:54:39.786217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.530 [2024-11-19 10:54:39.786225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:32.530 [2024-11-19 10:54:39.791351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.530 [2024-11-19 10:54:39.791374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.530 [2024-11-19 10:54:39.791382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:32.530 5704.00 IOPS, 713.00 MiB/s [2024-11-19T09:54:39.979Z] [2024-11-19 10:54:39.797805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.530 [2024-11-19 10:54:39.797828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.530 [2024-11-19 10:54:39.797836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:32.530 [2024-11-19 10:54:39.802918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:32.530 [2024-11-19 10:54:39.802939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.530 [2024-11-19 10:54:39.802955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:32.530 [2024-11-19 10:54:39.805769] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580)
00:27:32.530 [2024-11-19 10:54:39.805792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.530 [2024-11-19 10:54:39.805800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:32.530 [2024-11-19 10:54:39.811055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580)
00:27:32.530 [2024-11-19 10:54:39.811076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.530 [2024-11-19 10:54:39.811085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... the same three-record pattern repeats roughly every 5 ms from [2024-11-19 10:54:39.816287] through [2024-11-19 10:54:40.485901] (console time 00:27:32.530-00:27:33.056): a data digest error reported by nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done on tqpair=(0x15fc580), the affected READ command on sqid:1 (varying cid and lba, len:32, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion on qid:1 ...]
00:27:33.056 [2024-11-19 10:54:40.492601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580)
00:27:33.056 [2024-11-19 10:54:40.492625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.056 [2024-11-19 10:54:40.492635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:33.056 [2024-11-19 10:54:40.498335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580)
00:27:33.056 [2024-11-19 10:54:40.498357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.056 [2024-11-19 10:54:40.498365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0
dnr:0 00:27:33.056 [2024-11-19 10:54:40.501890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.056 [2024-11-19 10:54:40.501912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.056 [2024-11-19 10:54:40.501922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:33.318 [2024-11-19 10:54:40.509493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.318 [2024-11-19 10:54:40.509516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.318 [2024-11-19 10:54:40.509524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:33.318 [2024-11-19 10:54:40.516160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.318 [2024-11-19 10:54:40.516185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.318 [2024-11-19 10:54:40.516193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:33.318 [2024-11-19 10:54:40.522768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.318 [2024-11-19 10:54:40.522792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.318 [2024-11-19 10:54:40.522800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:33.318 [2024-11-19 10:54:40.528714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.318 [2024-11-19 10:54:40.528736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.318 [2024-11-19 10:54:40.528745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:33.318 [2024-11-19 10:54:40.535584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.318 [2024-11-19 10:54:40.535607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.318 [2024-11-19 10:54:40.535619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:33.318 [2024-11-19 10:54:40.543013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.318 [2024-11-19 10:54:40.543036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.318 [2024-11-19 10:54:40.543044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:33.318 [2024-11-19 10:54:40.550182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.318 [2024-11-19 10:54:40.550216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.318 [2024-11-19 10:54:40.550225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:33.318 [2024-11-19 10:54:40.557541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.318 [2024-11-19 10:54:40.557564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.318 [2024-11-19 10:54:40.557573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:33.318 [2024-11-19 10:54:40.565355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.318 [2024-11-19 10:54:40.565378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.318 [2024-11-19 10:54:40.565387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:33.318 [2024-11-19 10:54:40.572105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.318 [2024-11-19 10:54:40.572129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.318 [2024-11-19 10:54:40.572138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:33.318 [2024-11-19 10:54:40.578476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.318 [2024-11-19 10:54:40.578499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.318 [2024-11-19 10:54:40.578508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:33.318 [2024-11-19 10:54:40.584715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.318 [2024-11-19 10:54:40.584739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.318 [2024-11-19 10:54:40.584748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:33.318 [2024-11-19 10:54:40.590233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.318 [2024-11-19 10:54:40.590255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.318 [2024-11-19 10:54:40.590264] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:33.318 [2024-11-19 10:54:40.595778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.318 [2024-11-19 10:54:40.595806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.318 [2024-11-19 10:54:40.595815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:33.318 [2024-11-19 10:54:40.601318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.318 [2024-11-19 10:54:40.601341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.318 [2024-11-19 10:54:40.601349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:33.318 [2024-11-19 10:54:40.606703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.318 [2024-11-19 10:54:40.606726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.318 [2024-11-19 10:54:40.606735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:33.318 [2024-11-19 10:54:40.612102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.318 [2024-11-19 10:54:40.612124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.318 [2024-11-19 10:54:40.612133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:33.318 [2024-11-19 10:54:40.617444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.318 [2024-11-19 10:54:40.617467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.318 [2024-11-19 10:54:40.617475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:33.318 [2024-11-19 10:54:40.622588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.318 [2024-11-19 10:54:40.622611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.318 [2024-11-19 10:54:40.622619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:33.318 [2024-11-19 10:54:40.627620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.318 [2024-11-19 10:54:40.627644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.318 [2024-11-19 10:54:40.627653] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:33.318 [2024-11-19 10:54:40.633321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.318 [2024-11-19 10:54:40.633344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.318 [2024-11-19 10:54:40.633352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:33.318 [2024-11-19 10:54:40.638539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.318 [2024-11-19 10:54:40.638561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.319 [2024-11-19 10:54:40.638570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:33.319 [2024-11-19 10:54:40.644401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.319 [2024-11-19 10:54:40.644425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.319 [2024-11-19 10:54:40.644434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:33.319 [2024-11-19 10:54:40.649823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.319 [2024-11-19 10:54:40.649846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.319 [2024-11-19 10:54:40.649855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:33.319 [2024-11-19 10:54:40.655277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.319 [2024-11-19 10:54:40.655299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.319 [2024-11-19 10:54:40.655307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:33.319 [2024-11-19 10:54:40.660340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.319 [2024-11-19 10:54:40.660363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.319 [2024-11-19 10:54:40.660372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:33.319 [2024-11-19 10:54:40.665589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.319 [2024-11-19 10:54:40.665611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:33.319 [2024-11-19 10:54:40.665620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:33.319 [2024-11-19 10:54:40.671040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.319 [2024-11-19 10:54:40.671061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.319 [2024-11-19 10:54:40.671069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:33.319 [2024-11-19 10:54:40.676533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.319 [2024-11-19 10:54:40.676556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.319 [2024-11-19 10:54:40.676564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:33.319 [2024-11-19 10:54:40.681813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.319 [2024-11-19 10:54:40.681836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.319 [2024-11-19 10:54:40.681844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:33.319 [2024-11-19 10:54:40.684686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.319 [2024-11-19 10:54:40.684707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.319 [2024-11-19 10:54:40.684719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:33.319 [2024-11-19 10:54:40.689811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.319 [2024-11-19 10:54:40.689834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.319 [2024-11-19 10:54:40.689841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:33.319 [2024-11-19 10:54:40.695074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.319 [2024-11-19 10:54:40.695096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.319 [2024-11-19 10:54:40.695104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:33.319 [2024-11-19 10:54:40.700291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.319 [2024-11-19 10:54:40.700315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12800 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.319 [2024-11-19 10:54:40.700324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:33.319 [2024-11-19 10:54:40.705610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.319 [2024-11-19 10:54:40.705632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.319 [2024-11-19 10:54:40.705640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:33.319 [2024-11-19 10:54:40.710791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.319 [2024-11-19 10:54:40.710813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.319 [2024-11-19 10:54:40.710822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:33.319 [2024-11-19 10:54:40.716257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.319 [2024-11-19 10:54:40.716280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.319 [2024-11-19 10:54:40.716289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:33.319 [2024-11-19 10:54:40.721837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.319 [2024-11-19 10:54:40.721859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.319 [2024-11-19 10:54:40.721867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:33.319 [2024-11-19 10:54:40.727152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.319 [2024-11-19 10:54:40.727175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.319 [2024-11-19 10:54:40.727183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:33.319 [2024-11-19 10:54:40.732464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.319 [2024-11-19 10:54:40.732491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.319 [2024-11-19 10:54:40.732499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:33.319 [2024-11-19 10:54:40.737875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.319 [2024-11-19 10:54:40.737897] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.319 [2024-11-19 10:54:40.737905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:33.319 [2024-11-19 10:54:40.743228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.319 [2024-11-19 10:54:40.743251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.319 [2024-11-19 10:54:40.743259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:33.319 [2024-11-19 10:54:40.748553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.319 [2024-11-19 10:54:40.748575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.319 [2024-11-19 10:54:40.748583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:33.319 [2024-11-19 10:54:40.754218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.319 [2024-11-19 10:54:40.754240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.319 [2024-11-19 10:54:40.754248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:33.319 [2024-11-19 10:54:40.759682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.319 [2024-11-19 10:54:40.759705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.319 [2024-11-19 10:54:40.759713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:33.319 [2024-11-19 10:54:40.765238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.319 [2024-11-19 10:54:40.765261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.319 [2024-11-19 10:54:40.765270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:33.579 [2024-11-19 10:54:40.770846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.579 [2024-11-19 10:54:40.770868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.579 [2024-11-19 10:54:40.770876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:33.579 [2024-11-19 10:54:40.776160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15fc580) 00:27:33.579 [2024-11-19 10:54:40.776193] 
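The three-line pattern above is the whole story of this pass: bdevperf's receive path recomputes each READ payload's data digest through the accel framework, the injected crc32c corruption makes every check fail (hence nvme_tcp_accel_seq_recv_compute_crc32_done reporting a data digest error), and the request completes with COMMAND TRANSIENT TRANSPORT ERROR. The "(00/22)" is status code type 0x0 / status code 0x22 per the NVMe base spec, and dnr:0 means the Do Not Retry bit is clear, so the bdev layer, configured with --bdev-retry-count -1, silently resubmits each command. A quick way to tally such completions from a captured console log (the file name is hypothetical):

grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bdevperf-console.log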
00:27:33.580 5722.00 IOPS, 715.25 MiB/s
00:27:33.580 Latency(us)
00:27:33.580 [2024-11-19T09:54:41.029Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:33.580 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:27:33.580 nvme0n1 : 2.00 5721.08 715.14 0.00 0.00 2794.04 662.48 10029.86
00:27:33.580 [2024-11-19T09:54:41.029Z] ===================================================================================================================
00:27:33.580 [2024-11-19T09:54:41.029Z] Total : 5721.08 715.14 0.00 0.00 2794.04 662.48 10029.86
00:27:33.580 {
00:27:33.580   "results": [
00:27:33.580     {
00:27:33.580       "job": "nvme0n1",
00:27:33.580       "core_mask": "0x2",
00:27:33.580       "workload": "randread",
00:27:33.580       "status": "finished",
00:27:33.580       "queue_depth": 16,
00:27:33.580       "io_size": 131072,
00:27:33.580       "runtime": 2.003118,
00:27:33.580       "iops": 5721.080834978269,
00:27:33.580       "mibps": 715.1351043722836,
00:27:33.580       "io_failed": 0,
00:27:33.580       "io_timeout": 0,
00:27:33.580       "avg_latency_us": 2794.0393608012746,
00:27:33.580       "min_latency_us": 662.4834782608696,
00:27:33.580       "max_latency_us": 10029.857391304347
00:27:33.580     }
00:27:33.580   ],
00:27:33.580   "core_count": 1
00:27:33.580 }
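Note that io_failed is 0 even though every read above hit a digest error: each transient-transport-error completion was retried until it succeeded, so the errors appear only in the NVMe error counters, never as failed I/O (the 5722.00 IOPS headline is apparently the last periodic sample, while the JSON carries the run average). The summary is also self-consistent, since mibps is just iops times the 131072-byte io_size; a minimal check with bc:

echo 'scale=6; 5721.080834978269 * 131072 / 1048576' | bc   # -> 715.135104..., matching "mibps"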
00:27:33.580 10:54:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:33.580 10:54:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:33.580 10:54:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:33.580 | .driver_specific
00:27:33.580 | .nvme_error
00:27:33.580 | .status_code
00:27:33.580 | .command_transient_transport_error'
00:27:33.580 10:54:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:33.580 10:54:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 370 > 0 ))
00:27:33.580 10:54:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1839306
00:27:33.580 10:54:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1839306 ']'
00:27:33.580 10:54:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1839306
00:27:33.580 10:54:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:27:33.839 10:54:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:33.839 10:54:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1839306
00:27:33.839 10:54:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:27:33.839 10:54:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:27:33.839 10:54:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1839306'
00:27:33.839 killing process with pid 1839306
00:27:33.839 10:54:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1839306
00:27:33.840 Received shutdown signal, test time was about 2.000000 seconds
00:27:33.840
00:27:33.840 Latency(us)
00:27:33.840 [2024-11-19T09:54:41.288Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:33.840 [2024-11-19T09:54:41.289Z] ===================================================================================================================
00:27:33.840 [2024-11-19T09:54:41.289Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:33.840 10:54:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1839306
00:27:33.840 10:54:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:27:33.840 10:54:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:33.840 10:54:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:27:33.840 10:54:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:27:33.840 10:54:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:27:33.840 10:54:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1839861
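The pass/fail decision just traced rides on get_transient_errcount, which reads the per-bdev NVMe error counters that bdev_nvme_set_options --nvme-error-stat enables. A sketch of the helper, reconstructed from the xtrace above (the real digest.sh may differ cosmetically):

get_transient_errcount() {
    local bdev=$1
    bperf_rpc bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0]
        | .driver_specific
        | .nvme_error
        | .status_code
        | .command_transient_transport_error'
}

Here bperf_rpc wraps scripts/rpc.py -s /var/tmp/bperf.sock, as the @18 trace line shows; the randread pass counted 370 transient transport errors, so the (( 370 > 0 )) assertion passed and the first bdevperf instance was torn down before run_bperf_err started the randwrite pass.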
00:27:33.840 10:54:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1839861 /var/tmp/bperf.sock
00:27:33.840 10:54:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:27:33.840 10:54:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1839861 ']'
00:27:33.840 10:54:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:33.840 10:54:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:33.840 10:54:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:33.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:33.840 10:54:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:33.840 10:54:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:34.099 [2024-11-19 10:54:41.279817] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:27:34.099 [2024-11-19 10:54:41.279863] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1839861 ]
00:27:34.099 [2024-11-19 10:54:41.354648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:34.099 [2024-11-19 10:54:41.392597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:34.099 10:54:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:34.099 10:54:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:27:34.099 10:54:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:34.099 10:54:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:34.358 10:54:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:34.358 10:54:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:34.358 10:54:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:34.358 10:54:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
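Condensing the RPC traffic traced above and just below into one sequence makes the staging order visible: error statistics and infinite retry are configured first, any leftover crc32c injection from the previous pass is disabled before the controller is attached with --ddgst (data digest enabled on the NVMe/TCP connection), and only then is corruption re-armed. A sketch using the exact commands from this log (the RPC variable is just shorthand for readability):

RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
$RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # count NVMe errors, retry forever
$RPC accel_error_inject_error -o crc32c -t disable                   # clean slate before attaching
$RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
$RPC accel_error_inject_error -o crc32c -t corrupt -i 256            # re-arm crc32c corruption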
00:27:34.358 10:54:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:34.358 10:54:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:34.618 nvme0n1
00:27:34.877 10:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:27:34.877 10:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:34.877 10:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:34.877 10:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:34.877 10:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:34.877 10:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:34.877 Running I/O for 2 seconds...
00:27:34.877 [2024-11-19 10:54:42.195174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166f0bc0
00:27:34.877 [2024-11-19 10:54:42.196215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:34.877 [2024-11-19 10:54:42.196247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
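This pass exercises the write path (the completions now print WRITE commands at the 4096-byte block size, queue depth 128), and the reason injection could be armed before any I/O existed is the -z flag: bdevperf starts idle and only begins its 2-second run when perform_tests arrives over the RPC socket. The driving pattern, with paths taken from this log and the backgrounding/wait step implied by the waitforlisten trace above:

# start bdevperf idle (-z: wait for an RPC 'perform_tests' before running I/O)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
# ...wait for the socket, attach the controller, arm error injection (as traced above)...
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests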
[... the same three-line pattern repeats for the rest of the 2-second randwrite run, timestamps 10:54:42.204914 through 10:54:42.689015: a Data digest error on tqpair=(0x543640) from tcp.c:2233:data_crc32_calc_done with a varying pdu offset (pdu=0x2000166xxxxx), the affected WRITE command (varying cid and lba, len:1, SGL DATA BLOCK OFFSET 0x0 len:0x1000), and its completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22) dnr:0 ...]
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:35.399 [2024-11-19 10:54:42.697119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166eff18 00:27:35.399 [2024-11-19 10:54:42.698209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.399 [2024-11-19 10:54:42.698229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:35.399 [2024-11-19 10:54:42.706527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166f1430 00:27:35.399 [2024-11-19 10:54:42.707620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:18786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.399 [2024-11-19 10:54:42.707641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:35.399 [2024-11-19 10:54:42.715878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e84c0 00:27:35.399 [2024-11-19 10:54:42.717016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.399 [2024-11-19 10:54:42.717036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:35.399 [2024-11-19 10:54:42.725563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166fac10 00:27:35.399 [2024-11-19 10:54:42.726723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.399 [2024-11-19 10:54:42.726743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:35.399 [2024-11-19 10:54:42.734554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166ebb98 00:27:35.399 [2024-11-19 10:54:42.735744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:6505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.399 [2024-11-19 10:54:42.735763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:35.399 [2024-11-19 10:54:42.743138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166eee38 00:27:35.399 [2024-11-19 10:54:42.743956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:11687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.399 [2024-11-19 10:54:42.743976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:35.399 [2024-11-19 10:54:42.752445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166f2d80 00:27:35.399 [2024-11-19 10:54:42.753060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:4971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.399 [2024-11-19 10:54:42.753079] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:35.399 [2024-11-19 10:54:42.762088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166f3e60 00:27:35.399 [2024-11-19 10:54:42.762815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:15101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.399 [2024-11-19 10:54:42.762834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:35.399 [2024-11-19 10:54:42.772722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166dfdc0 00:27:35.399 [2024-11-19 10:54:42.774228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.399 [2024-11-19 10:54:42.774247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:35.399 [2024-11-19 10:54:42.779170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166ee190 00:27:35.399 [2024-11-19 10:54:42.779781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.399 [2024-11-19 10:54:42.779800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:35.399 [2024-11-19 10:54:42.788744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e88f8 00:27:35.399 [2024-11-19 10:54:42.789570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.400 [2024-11-19 10:54:42.789593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:35.400 [2024-11-19 10:54:42.797906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e84c0 00:27:35.400 [2024-11-19 10:54:42.798823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:6011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.400 [2024-11-19 10:54:42.798842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:35.400 [2024-11-19 10:54:42.807223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166f6cc8 00:27:35.400 [2024-11-19 10:54:42.808139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.400 [2024-11-19 10:54:42.808158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:35.400 [2024-11-19 10:54:42.816221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166fa7d8 00:27:35.400 [2024-11-19 10:54:42.817136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.400 [2024-11-19 
10:54:42.817156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:35.400 [2024-11-19 10:54:42.825801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e27f0 00:27:35.400 [2024-11-19 10:54:42.826876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.400 [2024-11-19 10:54:42.826895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:35.400 [2024-11-19 10:54:42.835403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e6fa8 00:27:35.400 [2024-11-19 10:54:42.836574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.400 [2024-11-19 10:54:42.836593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:35.400 [2024-11-19 10:54:42.845120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166eaab8 00:27:35.400 [2024-11-19 10:54:42.846450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.400 [2024-11-19 10:54:42.846469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:35.663 [2024-11-19 10:54:42.855007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e3060 00:27:35.663 [2024-11-19 10:54:42.856463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.663 [2024-11-19 10:54:42.856482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:35.663 [2024-11-19 10:54:42.864695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166fac10 00:27:35.663 [2024-11-19 10:54:42.866199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.663 [2024-11-19 10:54:42.866218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:35.663 [2024-11-19 10:54:42.871160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166f8e88 00:27:35.663 [2024-11-19 10:54:42.871833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.663 [2024-11-19 10:54:42.871852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:35.663 [2024-11-19 10:54:42.880464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e5a90 00:27:35.663 [2024-11-19 10:54:42.881189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:6903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.663 
[2024-11-19 10:54:42.881208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:35.663 [2024-11-19 10:54:42.889634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166dece0 00:27:35.663 [2024-11-19 10:54:42.890353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.663 [2024-11-19 10:54:42.890373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:35.663 [2024-11-19 10:54:42.898837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e7818 00:27:35.663 [2024-11-19 10:54:42.899552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.663 [2024-11-19 10:54:42.899571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:35.663 [2024-11-19 10:54:42.908023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e12d8 00:27:35.663 [2024-11-19 10:54:42.908744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.663 [2024-11-19 10:54:42.908763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:35.663 [2024-11-19 10:54:42.916600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e9168 00:27:35.663 [2024-11-19 10:54:42.917300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.663 [2024-11-19 10:54:42.917319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:35.663 [2024-11-19 10:54:42.926830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166fd640 00:27:35.663 [2024-11-19 10:54:42.927640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:23972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.663 [2024-11-19 10:54:42.927660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:35.663 [2024-11-19 10:54:42.936314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e1710 00:27:35.663 [2024-11-19 10:54:42.937225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.663 [2024-11-19 10:54:42.937244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:35.663 [2024-11-19 10:54:42.945007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e27f0 00:27:35.663 [2024-11-19 10:54:42.945933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:35.663 [2024-11-19 10:54:42.945955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:35.663 [2024-11-19 10:54:42.955230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166ec840 00:27:35.663 [2024-11-19 10:54:42.956305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.663 [2024-11-19 10:54:42.956324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:35.663 [2024-11-19 10:54:42.964934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166df550 00:27:35.663 [2024-11-19 10:54:42.966117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:21008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.663 [2024-11-19 10:54:42.966136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:35.663 [2024-11-19 10:54:42.973778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166fdeb0 00:27:35.663 [2024-11-19 10:54:42.974931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.663 [2024-11-19 10:54:42.974954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:35.663 [2024-11-19 10:54:42.983497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166f0788 00:27:35.663 [2024-11-19 10:54:42.984768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.663 [2024-11-19 10:54:42.984787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:35.663 [2024-11-19 10:54:42.992857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166eaab8 00:27:35.663 [2024-11-19 10:54:42.994121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:18396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.663 [2024-11-19 10:54:42.994140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:35.663 [2024-11-19 10:54:43.000876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e4578 00:27:35.663 [2024-11-19 10:54:43.002180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:6818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.663 [2024-11-19 10:54:43.002200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:35.663 [2024-11-19 10:54:43.009491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e3498 00:27:35.663 [2024-11-19 10:54:43.010181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14207 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:27:35.663 [2024-11-19 10:54:43.010201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:35.663 [2024-11-19 10:54:43.018990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e88f8 00:27:35.663 [2024-11-19 10:54:43.019805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:18894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.663 [2024-11-19 10:54:43.019824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:35.663 [2024-11-19 10:54:43.028169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e8d30 00:27:35.663 [2024-11-19 10:54:43.029074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.663 [2024-11-19 10:54:43.029093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:35.663 [2024-11-19 10:54:43.037764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166ee5c8 00:27:35.663 [2024-11-19 10:54:43.038797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.663 [2024-11-19 10:54:43.038816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:35.663 [2024-11-19 10:54:43.046314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166f5378 00:27:35.663 [2024-11-19 10:54:43.047016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.663 [2024-11-19 10:54:43.047035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:35.663 [2024-11-19 10:54:43.055688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e6b70 00:27:35.663 [2024-11-19 10:54:43.056184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.663 [2024-11-19 10:54:43.056203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:35.663 [2024-11-19 10:54:43.065286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166eb328 00:27:35.663 [2024-11-19 10:54:43.065885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:6582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.663 [2024-11-19 10:54:43.065904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:35.663 [2024-11-19 10:54:43.074904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166ec840 00:27:35.663 [2024-11-19 10:54:43.075639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24155 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.663 [2024-11-19 10:54:43.075660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:35.663 [2024-11-19 10:54:43.084290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e4de8 00:27:35.663 [2024-11-19 10:54:43.085370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.663 [2024-11-19 10:54:43.085389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:35.663 [2024-11-19 10:54:43.093521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e27f0 00:27:35.663 [2024-11-19 10:54:43.094561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.663 [2024-11-19 10:54:43.094580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:35.663 [2024-11-19 10:54:43.102715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e5658 00:27:35.663 [2024-11-19 10:54:43.103799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.663 [2024-11-19 10:54:43.103819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:35.955 [2024-11-19 10:54:43.112359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e4140 00:27:35.955 [2024-11-19 10:54:43.113430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.955 [2024-11-19 10:54:43.113455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:35.955 [2024-11-19 10:54:43.121848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e5ec8 00:27:35.955 [2024-11-19 10:54:43.122938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:11541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.955 [2024-11-19 10:54:43.122961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:35.955 [2024-11-19 10:54:43.130642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e5220 00:27:35.955 [2024-11-19 10:54:43.131977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.955 [2024-11-19 10:54:43.131997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:35.955 [2024-11-19 10:54:43.138782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166ddc00 00:27:35.955 [2024-11-19 10:54:43.139490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 
lba:9650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.955 [2024-11-19 10:54:43.139509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:35.955 [2024-11-19 10:54:43.149330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e6738 00:27:35.955 [2024-11-19 10:54:43.150198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:18302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.955 [2024-11-19 10:54:43.150217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:35.955 [2024-11-19 10:54:43.160945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e12d8 00:27:35.955 [2024-11-19 10:54:43.162452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.955 [2024-11-19 10:54:43.162472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:35.955 [2024-11-19 10:54:43.167424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e8d30 00:27:35.955 [2024-11-19 10:54:43.168134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.955 [2024-11-19 10:54:43.168153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:35.955 [2024-11-19 10:54:43.176570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e88f8 00:27:35.955 [2024-11-19 10:54:43.177394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.955 [2024-11-19 10:54:43.177414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:35.955 27321.00 IOPS, 106.72 MiB/s [2024-11-19T09:54:43.404Z] [2024-11-19 10:54:43.186409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e73e0 00:27:35.955 [2024-11-19 10:54:43.187213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.955 [2024-11-19 10:54:43.187235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:35.955 [2024-11-19 10:54:43.196021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166f7970 00:27:35.955 [2024-11-19 10:54:43.197055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.955 [2024-11-19 10:54:43.197075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:35.955 [2024-11-19 10:54:43.205808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e1710 00:27:35.955 [2024-11-19 10:54:43.207010] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.955 [2024-11-19 10:54:43.207030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:35.955 [2024-11-19 10:54:43.214727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166f6458 00:27:35.955 [2024-11-19 10:54:43.215648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:10267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.955 [2024-11-19 10:54:43.215669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:35.955 [2024-11-19 10:54:43.224996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e3498 00:27:35.955 [2024-11-19 10:54:43.226214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:10166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.955 [2024-11-19 10:54:43.226233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:35.955 [2024-11-19 10:54:43.233641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166fcdd0 00:27:35.955 [2024-11-19 10:54:43.234482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.955 [2024-11-19 10:54:43.234501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:35.955 [2024-11-19 10:54:43.244043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166ddc00 00:27:35.955 [2024-11-19 10:54:43.245386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:6880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.955 [2024-11-19 10:54:43.245406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:35.955 [2024-11-19 10:54:43.252681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166f4298 00:27:35.955 [2024-11-19 10:54:43.254018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.955 [2024-11-19 10:54:43.254039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:35.955 [2024-11-19 10:54:43.262387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166fd640 00:27:35.955 [2024-11-19 10:54:43.263551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.955 [2024-11-19 10:54:43.263571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:35.955 [2024-11-19 10:54:43.270938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166fd208 00:27:35.955 [2024-11-19 
10:54:43.271755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:7565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.955 [2024-11-19 10:54:43.271775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:35.955 [2024-11-19 10:54:43.280011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166fa3a0 00:27:35.955 [2024-11-19 10:54:43.280804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.955 [2024-11-19 10:54:43.280824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:35.955 [2024-11-19 10:54:43.289234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166fcdd0 00:27:35.955 [2024-11-19 10:54:43.290078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.955 [2024-11-19 10:54:43.290097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:35.955 [2024-11-19 10:54:43.299673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166ddc00 00:27:35.955 [2024-11-19 10:54:43.300961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.955 [2024-11-19 10:54:43.300980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:35.955 [2024-11-19 10:54:43.309295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166f5378 00:27:35.955 [2024-11-19 10:54:43.310679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:17743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.955 [2024-11-19 10:54:43.310699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:35.956 [2024-11-19 10:54:43.318906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166f4298 00:27:35.956 [2024-11-19 10:54:43.320410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:94 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.956 [2024-11-19 10:54:43.320429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:35.956 [2024-11-19 10:54:43.325376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166fd640 00:27:35.956 [2024-11-19 10:54:43.326101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.956 [2024-11-19 10:54:43.326120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:35.956 [2024-11-19 10:54:43.335102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166f2d80 
00:27:35.956 [2024-11-19 10:54:43.335908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.956 [2024-11-19 10:54:43.335927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:35.956 [2024-11-19 10:54:43.344741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166fa7d8 00:27:35.956 [2024-11-19 10:54:43.345799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.956 [2024-11-19 10:54:43.345818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:35.956 [2024-11-19 10:54:43.354147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166eee38 00:27:35.956 [2024-11-19 10:54:43.354760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.956 [2024-11-19 10:54:43.354784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:35.956 [2024-11-19 10:54:43.363774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166fc128 00:27:35.956 [2024-11-19 10:54:43.364516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.956 [2024-11-19 10:54:43.364536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:35.956 [2024-11-19 10:54:43.373172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166fe2e8 00:27:35.956 [2024-11-19 10:54:43.374165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.956 [2024-11-19 10:54:43.374185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:35.956 [2024-11-19 10:54:43.381769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166de8a8 00:27:35.956 [2024-11-19 10:54:43.382773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.956 [2024-11-19 10:54:43.382792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:35.956 [2024-11-19 10:54:43.391672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166df550 00:27:35.956 [2024-11-19 10:54:43.392895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.956 [2024-11-19 10:54:43.392915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:36.245 [2024-11-19 10:54:43.400652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with 
pdu=0x2000166f1430 00:27:36.245 [2024-11-19 10:54:43.401627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:9901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.245 [2024-11-19 10:54:43.401648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:36.245 [2024-11-19 10:54:43.410111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166f1868 00:27:36.245 [2024-11-19 10:54:43.410942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:17984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.245 [2024-11-19 10:54:43.410966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:36.245 [2024-11-19 10:54:43.420020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166ee5c8 00:27:36.245 [2024-11-19 10:54:43.421037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:14639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.245 [2024-11-19 10:54:43.421057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:36.245 [2024-11-19 10:54:43.429953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e4de8 00:27:36.246 [2024-11-19 10:54:43.431200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:3496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.246 [2024-11-19 10:54:43.431220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:36.246 [2024-11-19 10:54:43.439461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e1b48 00:27:36.246 [2024-11-19 10:54:43.440229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.246 [2024-11-19 10:54:43.440250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:36.246 [2024-11-19 10:54:43.448346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166dece0 00:27:36.246 [2024-11-19 10:54:43.449026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:7316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.246 [2024-11-19 10:54:43.449046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:36.246 [2024-11-19 10:54:43.457741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166ef6a8 00:27:36.246 [2024-11-19 10:54:43.458642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.246 [2024-11-19 10:54:43.458663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:36.246 [2024-11-19 10:54:43.467131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x543640) with pdu=0x2000166e99d8 00:27:36.246 [2024-11-19 10:54:43.468003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:6421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.246 [2024-11-19 10:54:43.468023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:36.246 [2024-11-19 10:54:43.476415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e99d8 00:27:36.246 [2024-11-19 10:54:43.477308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:25468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.246 [2024-11-19 10:54:43.477329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:36.246 [2024-11-19 10:54:43.486162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166f1430 00:27:36.246 [2024-11-19 10:54:43.487241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:12028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.246 [2024-11-19 10:54:43.487260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:36.246 [2024-11-19 10:54:43.496173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166f8e88 00:27:36.246 [2024-11-19 10:54:43.497440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.246 [2024-11-19 10:54:43.497460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:36.246 [2024-11-19 10:54:43.504345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e5ec8 00:27:36.246 [2024-11-19 10:54:43.505114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.246 [2024-11-19 10:54:43.505134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:36.246 [2024-11-19 10:54:43.513731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166f0350 00:27:36.246 [2024-11-19 10:54:43.514619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.246 [2024-11-19 10:54:43.514639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:36.246 [2024-11-19 10:54:43.523205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166f0350 00:27:36.246 [2024-11-19 10:54:43.523962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.246 [2024-11-19 10:54:43.523982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:36.246 [2024-11-19 10:54:43.533123] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x543640) with pdu=0x2000166e01f8 00:27:36.246 [2024-11-19 10:54:43.534181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.246 [2024-11-19 10:54:43.534201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:36.246 [2024-11-19 10:54:43.542942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166f0bc0 00:27:36.246 [2024-11-19 10:54:43.544066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.246 [2024-11-19 10:54:43.544086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:36.246 [2024-11-19 10:54:43.552243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e3060 00:27:36.246 [2024-11-19 10:54:43.553331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.246 [2024-11-19 10:54:43.553350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:36.246 [2024-11-19 10:54:43.560190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166f57b0 00:27:36.246 [2024-11-19 10:54:43.560799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.246 [2024-11-19 10:54:43.560820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:36.246 [2024-11-19 10:54:43.569298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e9168 00:27:36.246 [2024-11-19 10:54:43.569903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:18521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.246 [2024-11-19 10:54:43.569923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:36.246 [2024-11-19 10:54:43.578743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166ee190 00:27:36.246 [2024-11-19 10:54:43.579222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:19687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.246 [2024-11-19 10:54:43.579242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:36.246 [2024-11-19 10:54:43.589313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e27f0 00:27:36.246 [2024-11-19 10:54:43.590469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.246 [2024-11-19 10:54:43.590489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:36.246 [2024-11-19 10:54:43.598247] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e1b48 00:27:36.246 [2024-11-19 10:54:43.599237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:23435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.246 [2024-11-19 10:54:43.599261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:36.246 [2024-11-19 10:54:43.607031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e9e10 00:27:36.246 [2024-11-19 10:54:43.607864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:15911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.246 [2024-11-19 10:54:43.607884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:36.246 [2024-11-19 10:54:43.616278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e7c50 00:27:36.246 [2024-11-19 10:54:43.617247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:14829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.246 [2024-11-19 10:54:43.617267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:36.246 [2024-11-19 10:54:43.624902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166ef270 00:27:36.246 [2024-11-19 10:54:43.625845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.246 [2024-11-19 10:54:43.625865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:36.246 [2024-11-19 10:54:43.634293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166f0350 00:27:36.246 [2024-11-19 10:54:43.635136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.246 [2024-11-19 10:54:43.635155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:36.246 [2024-11-19 10:54:43.643985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166f3e60 00:27:36.246 [2024-11-19 10:54:43.644872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.246 [2024-11-19 10:54:43.644892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:36.246 [2024-11-19 10:54:43.653123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166f5be8 00:27:36.246 [2024-11-19 10:54:43.654079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:18142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.246 [2024-11-19 10:54:43.654099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:36.246 [2024-11-19 
10:54:43.662814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166eaef0 00:27:36.246 [2024-11-19 10:54:43.664021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.246 [2024-11-19 10:54:43.664041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:36.246 [2024-11-19 10:54:43.671603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e4de8 00:27:36.247 [2024-11-19 10:54:43.672496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.247 [2024-11-19 10:54:43.672517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:36.247 [2024-11-19 10:54:43.680697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166fb8b8 00:27:36.247 [2024-11-19 10:54:43.681340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:25040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.247 [2024-11-19 10:54:43.681360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:36.247 [2024-11-19 10:54:43.690242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166eaef0 00:27:36.247 [2024-11-19 10:54:43.690850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.247 [2024-11-19 10:54:43.690870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:36.532 [2024-11-19 10:54:43.698861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166f9f68 00:27:36.532 [2024-11-19 10:54:43.699476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.532 [2024-11-19 10:54:43.699496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:36.532 [2024-11-19 10:54:43.710125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166f9f68 00:27:36.532 [2024-11-19 10:54:43.711222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.532 [2024-11-19 10:54:43.711243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:36.532 [2024-11-19 10:54:43.718284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166fb048 00:27:36.532 [2024-11-19 10:54:43.718887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.532 [2024-11-19 10:54:43.718907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:36.532 
[2024-11-19 10:54:43.728111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e1f80 00:27:36.532 [2024-11-19 10:54:43.728958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:18765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.532 [2024-11-19 10:54:43.728979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:36.532 [2024-11-19 10:54:43.739395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e1f80 00:27:36.532 [2024-11-19 10:54:43.740804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.532 [2024-11-19 10:54:43.740823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:36.532 [2024-11-19 10:54:43.748907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166f81e0 00:27:36.532 [2024-11-19 10:54:43.750309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.532 [2024-11-19 10:54:43.750328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:36.532 [2024-11-19 10:54:43.757914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166ee5c8 00:27:36.532 [2024-11-19 10:54:43.759541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.532 [2024-11-19 10:54:43.759560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:36.532 [2024-11-19 10:54:43.767811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166f0350 00:27:36.532 [2024-11-19 10:54:43.769315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.532 [2024-11-19 10:54:43.769334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:36.532 [2024-11-19 10:54:43.774277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166ecc78 00:27:36.532 [2024-11-19 10:54:43.774962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.532 [2024-11-19 10:54:43.774982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:36.532 [2024-11-19 10:54:43.783040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166f0bc0 00:27:36.532 [2024-11-19 10:54:43.783702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:9967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.532 [2024-11-19 10:54:43.783722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 
00:27:36.532 [2024-11-19 10:54:43.792652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e2c28 00:27:36.532 [2024-11-19 10:54:43.793433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.532 [2024-11-19 10:54:43.793452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.532 [2024-11-19 10:54:43.801975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e4578 00:27:36.532 [2024-11-19 10:54:43.802751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:19997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.532 [2024-11-19 10:54:43.802769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:36.532 [2024-11-19 10:54:43.812882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166fe720 00:27:36.532 [2024-11-19 10:54:43.813941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.532 [2024-11-19 10:54:43.813965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:36.532 [2024-11-19 10:54:43.820670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166f35f0 00:27:36.532 [2024-11-19 10:54:43.821121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.532 [2024-11-19 10:54:43.821141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:36.532 [2024-11-19 10:54:43.830282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166eff18 00:27:36.532 [2024-11-19 10:54:43.830850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.532 [2024-11-19 10:54:43.830870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:36.532 [2024-11-19 10:54:43.838935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e12d8 00:27:36.532 [2024-11-19 10:54:43.839419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.532 [2024-11-19 10:54:43.839442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:36.532 [2024-11-19 10:54:43.849464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166ef6a8 00:27:36.532 [2024-11-19 10:54:43.850611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.532 [2024-11-19 10:54:43.850630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0050 
p:0 m:0 dnr:0 00:27:36.532 [2024-11-19 10:54:43.858045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166fc998 00:27:36.532 [2024-11-19 10:54:43.859061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:14218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.532 [2024-11-19 10:54:43.859080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:36.532 [2024-11-19 10:54:43.867223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166fd208 00:27:36.532 [2024-11-19 10:54:43.867796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.532 [2024-11-19 10:54:43.867816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:36.532 [2024-11-19 10:54:43.877379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166de038 00:27:36.532 [2024-11-19 10:54:43.878638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:25147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.532 [2024-11-19 10:54:43.878658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:36.532 [2024-11-19 10:54:43.887001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166f5be8 00:27:36.532 [2024-11-19 10:54:43.888376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.532 [2024-11-19 10:54:43.888396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:36.533 [2024-11-19 10:54:43.895492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e5658 00:27:36.533 [2024-11-19 10:54:43.896425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.533 [2024-11-19 10:54:43.896445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:36.533 [2024-11-19 10:54:43.905127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166ff3c8 00:27:36.533 [2024-11-19 10:54:43.906394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.533 [2024-11-19 10:54:43.906414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:36.533 [2024-11-19 10:54:43.913754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166f2948 00:27:36.533 [2024-11-19 10:54:43.914760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:16763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.533 [2024-11-19 10:54:43.914780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:36.533 [2024-11-19 10:54:43.923042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e5ec8 00:27:36.533 [2024-11-19 10:54:43.924008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:18528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.533 [2024-11-19 10:54:43.924031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:36.533 [2024-11-19 10:54:43.932031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166fac10 00:27:36.533 [2024-11-19 10:54:43.933002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:25147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.533 [2024-11-19 10:54:43.933021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:36.533 [2024-11-19 10:54:43.943805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166f2510 00:27:36.533 [2024-11-19 10:54:43.945214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.533 [2024-11-19 10:54:43.945241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:36.533 [2024-11-19 10:54:43.953427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166f4b08 00:27:36.533 [2024-11-19 10:54:43.954964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:13715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.533 [2024-11-19 10:54:43.954983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:36.533 [2024-11-19 10:54:43.960046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166ecc78 00:27:36.533 [2024-11-19 10:54:43.960855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:11273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.533 [2024-11-19 10:54:43.960874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:36.533 [2024-11-19 10:54:43.969911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e5ec8 00:27:36.533 [2024-11-19 10:54:43.970907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.533 [2024-11-19 10:54:43.970927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:36.793 [2024-11-19 10:54:43.981620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e8d30 00:27:36.793 [2024-11-19 10:54:43.983106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.793 [2024-11-19 10:54:43.983125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:46 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:36.793 [2024-11-19 10:54:43.991420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166fb480 00:27:36.793 [2024-11-19 10:54:43.992981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.793 [2024-11-19 10:54:43.993000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:36.793 [2024-11-19 10:54:43.998036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166f2948 00:27:36.793 [2024-11-19 10:54:43.998890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.793 [2024-11-19 10:54:43.998909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:36.793 [2024-11-19 10:54:44.009288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166f4b08 00:27:36.793 [2024-11-19 10:54:44.010537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.793 [2024-11-19 10:54:44.010557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:36.793 [2024-11-19 10:54:44.017601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166f3e60 00:27:36.793 [2024-11-19 10:54:44.018675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:14037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.793 [2024-11-19 10:54:44.018694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:36.793 [2024-11-19 10:54:44.026088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166fd640 00:27:36.793 [2024-11-19 10:54:44.026718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.793 [2024-11-19 10:54:44.026737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:36.793 [2024-11-19 10:54:44.035472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e73e0 00:27:36.793 [2024-11-19 10:54:44.036322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.793 [2024-11-19 10:54:44.036343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:36.793 [2024-11-19 10:54:44.044215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166f31b8 00:27:36.793 [2024-11-19 10:54:44.045041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.793 [2024-11-19 10:54:44.045060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:36.793 [2024-11-19 10:54:44.053562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e6300 00:27:36.793 [2024-11-19 10:54:44.054392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.793 [2024-11-19 10:54:44.054411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:36.793 [2024-11-19 10:54:44.064450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166fb8b8 00:27:36.793 [2024-11-19 10:54:44.065642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.793 [2024-11-19 10:54:44.065670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:36.793 [2024-11-19 10:54:44.073170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166feb58 00:27:36.793 [2024-11-19 10:54:44.074355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.793 [2024-11-19 10:54:44.074375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:36.793 [2024-11-19 10:54:44.081680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e9168 00:27:36.793 [2024-11-19 10:54:44.082427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.793 [2024-11-19 10:54:44.082446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:36.793 [2024-11-19 10:54:44.091031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166f6020 00:27:36.793 [2024-11-19 10:54:44.091643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.793 [2024-11-19 10:54:44.091662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:36.793 [2024-11-19 10:54:44.100640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166fda78 00:27:36.793 [2024-11-19 10:54:44.101402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:19616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.793 [2024-11-19 10:54:44.101421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:36.793 [2024-11-19 10:54:44.109360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166ddc00 00:27:36.793 [2024-11-19 10:54:44.110662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.793 [2024-11-19 10:54:44.110682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:36.793 [2024-11-19 10:54:44.117239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166f0bc0 00:27:36.793 [2024-11-19 10:54:44.117955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.793 [2024-11-19 10:54:44.117974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:36.793 [2024-11-19 10:54:44.126859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e5ec8 00:27:36.793 [2024-11-19 10:54:44.127703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.793 [2024-11-19 10:54:44.127722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:36.793 [2024-11-19 10:54:44.138040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166f6020 00:27:36.793 [2024-11-19 10:54:44.139240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.794 [2024-11-19 10:54:44.139260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:36.794 [2024-11-19 10:54:44.145795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166fe2e8 00:27:36.794 [2024-11-19 10:54:44.146304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.794 [2024-11-19 10:54:44.146323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:36.794 [2024-11-19 10:54:44.155408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166f3a28 00:27:36.794 [2024-11-19 10:54:44.156039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.794 [2024-11-19 10:54:44.156059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:36.794 [2024-11-19 10:54:44.165025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166e6b70 00:27:36.794 [2024-11-19 10:54:44.165767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.794 [2024-11-19 10:54:44.165793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:36.794 [2024-11-19 10:54:44.173996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166f2d80 00:27:36.794 [2024-11-19 10:54:44.175073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.794 [2024-11-19 10:54:44.175093] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:27:36.794 [2024-11-19 10:54:44.183277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543640) with pdu=0x2000166df550
00:27:36.794 27384.50 IOPS, 106.97 MiB/s [2024-11-19T09:54:44.243Z] [2024-11-19 10:54:44.184288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:36.794 [2024-11-19 10:54:44.184306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:27:36.794
00:27:36.794 Latency(us)
00:27:36.794 [2024-11-19T09:54:44.243Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:36.794 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:27:36.794 nvme0n1 : 2.00 27389.75 106.99 0.00 0.00 4667.17 1823.61 14132.98
00:27:36.794 [2024-11-19T09:54:44.243Z] ===================================================================================================================
00:27:36.794 [2024-11-19T09:54:44.243Z] Total : 27389.75 106.99 0.00 0.00 4667.17 1823.61 14132.98
00:27:36.794 {
00:27:36.794   "results": [
00:27:36.794     {
00:27:36.794       "job": "nvme0n1",
00:27:36.794       "core_mask": "0x2",
00:27:36.794       "workload": "randwrite",
00:27:36.794       "status": "finished",
00:27:36.794       "queue_depth": 128,
00:27:36.794       "io_size": 4096,
00:27:36.794       "runtime": 2.00429,
00:27:36.794       "iops": 27389.74898841984,
00:27:36.794       "mibps": 106.991206986015,
00:27:36.794       "io_failed": 0,
00:27:36.794       "io_timeout": 0,
00:27:36.794       "avg_latency_us": 4667.171456933974,
00:27:36.794       "min_latency_us": 1823.6104347826088,
00:27:36.794       "max_latency_us": 14132.980869565217
00:27:36.794     }
00:27:36.794   ],
00:27:36.794   "core_count": 1
00:27:36.794 }
00:27:36.794 10:54:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:36.794 10:54:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:36.794 10:54:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:36.794 | .driver_specific
00:27:36.794 | .nvme_error
00:27:36.794 | .status_code
00:27:36.794 | .command_transient_transport_error'
00:27:36.794 10:54:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:37.054 10:54:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 215 > 0 ))
00:27:37.054 10:54:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1839861
00:27:37.054 10:54:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1839861 ']'
00:27:37.054 10:54:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1839861
00:27:37.054 10:54:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:27:37.054 10:54:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:37.054 10:54:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1839861
00:27:37.054 10:54:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:27:37.054 10:54:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:27:37.054 10:54:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1839861'
00:27:37.054 killing process with pid 1839861
00:27:37.054 10:54:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1839861
00:27:37.054 Received shutdown signal, test time was about 2.000000 seconds
00:27:37.054
00:27:37.054 Latency(us)
00:27:37.054 [2024-11-19T09:54:44.503Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:37.054 [2024-11-19T09:54:44.503Z] ===================================================================================================================
00:27:37.054 [2024-11-19T09:54:44.503Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:37.054 10:54:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1839861
00:27:37.313 10:54:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:27:37.313 10:54:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:37.313 10:54:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:27:37.313 10:54:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:27:37.313 10:54:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:27:37.313 10:54:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1840343
00:27:37.313 10:54:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1840343 /var/tmp/bperf.sock
00:27:37.313 10:54:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:27:37.313 10:54:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1840343 ']'
00:27:37.313 10:54:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:37.313 10:54:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:37.313 10:54:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:37.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:37.313 10:54:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:37.313 10:54:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:37.313 [2024-11-19 10:54:44.652072] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
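Note on the check traced just above: get_transient_errcount reads bdevperf's per-bdev iostat over the bperf RPC socket and extracts the NVMe transient-transport-error counter, and the run passes because that count (215 here) is greater than zero. A minimal standalone sketch of the same check, with the rpc.py path, socket, bdev name, and jq filter taken verbatim from the trace; the 215 is simply what this run counted, not a required value:

    # Query bdevperf's per-bdev I/O statistics over its RPC socket.
    # The nvme_error counters appear because bdev_nvme_set_options
    # --nvme-error-stat was issued earlier in the run, as in the trace.
    errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # The test only requires that at least one injected digest error was
    # counted as a transient transport error; this run observed 215.
    (( errcount > 0 ))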
00:27:37.313 [2024-11-19 10:54:44.652119] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1840343 ]
00:27:37.313 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:37.313 Zero copy mechanism will not be used.
00:27:37.313 [2024-11-19 10:54:44.725356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:37.573 [2024-11-19 10:54:44.766257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:37.573 10:54:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:37.573 10:54:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:27:37.573 10:54:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:37.573 10:54:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:37.832 10:54:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:37.832 10:54:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:37.832 10:54:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:37.832 10:54:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:37.832 10:54:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:37.832 10:54:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:38.092 nvme0n1
00:27:38.092 10:54:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:27:38.092 10:54:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:38.092 10:54:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:38.092 10:54:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:38.092 10:54:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:38.092 10:54:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:38.092 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:38.092 Zero copy mechanism will not be used.
00:27:38.092 Running I/O for 2 seconds...
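The trace above is the complete host-side setup for this 131072-byte, qd=16 error run; condensed into plain commands it looks as follows. Paths, socket, target address, and NQN are taken from the trace. rpc_cmd is the autotest suite's RPC wrapper and the socket it targets is configured by the test environment, so those two lines are schematic rather than literal:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Record NVMe error-status counters and retry failed I/O indefinitely
    # (-1), so injected digest errors are counted rather than failing I/O.
    $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Clear any leftover crc32c fault injection from the previous run.
    rpc_cmd accel_error_inject_error -o crc32c -t disable   # schematic: env-configured socket
    # Attach the TCP target with data digest (--ddgst) enabled.
    $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Corrupt the next 32 crc32c operations so computed data digests stop
    # matching, producing the digest errors logged below.
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32   # schematic, as above
    # Start the timed randwrite workload bdevperf was launched with.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests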
00:27:38.092 [2024-11-19 10:54:45.487935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.092 [2024-11-19 10:54:45.488028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.092 [2024-11-19 10:54:45.488056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:54:45.493095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.092 [2024-11-19 10:54:45.493158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.092 [2024-11-19 10:54:45.493182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:54:45.497980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.092 [2024-11-19 10:54:45.498037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.092 [2024-11-19 10:54:45.498058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:54:45.502478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.092 [2024-11-19 10:54:45.502549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.092 [2024-11-19 10:54:45.502570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:54:45.506968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.092 [2024-11-19 10:54:45.507026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.092 [2024-11-19 10:54:45.507045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:54:45.511672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.092 [2024-11-19 10:54:45.511734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.092 [2024-11-19 10:54:45.511753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:54:45.516259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.092 [2024-11-19 10:54:45.516320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.092 [2024-11-19 10:54:45.516339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:54:45.520844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.092 [2024-11-19 10:54:45.520909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.092 [2024-11-19 10:54:45.520928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:54:45.525289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.092 [2024-11-19 10:54:45.525350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.092 [2024-11-19 10:54:45.525370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:54:45.529774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.092 [2024-11-19 10:54:45.529843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.092 [2024-11-19 10:54:45.529862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:54:45.534313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.092 [2024-11-19 10:54:45.534373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.092 [2024-11-19 10:54:45.534392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:38.092 [2024-11-19 10:54:45.539005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.092 [2024-11-19 10:54:45.539065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.092 [2024-11-19 10:54:45.539084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.352 [2024-11-19 10:54:45.543469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.352 [2024-11-19 10:54:45.543538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.352 [2024-11-19 10:54:45.543558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.353 [2024-11-19 10:54:45.547939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.353 [2024-11-19 10:54:45.548016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.353 [2024-11-19 10:54:45.548038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.353 [2024-11-19 10:54:45.552400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.353 [2024-11-19 10:54:45.552469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.353 [2024-11-19 10:54:45.552489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:38.353 [2024-11-19 10:54:45.556783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.353 [2024-11-19 10:54:45.556849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.353 [2024-11-19 10:54:45.556867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.353 [2024-11-19 10:54:45.561208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.353 [2024-11-19 10:54:45.561266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.353 [2024-11-19 10:54:45.561285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.353 [2024-11-19 10:54:45.565580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.353 [2024-11-19 10:54:45.565651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.353 [2024-11-19 10:54:45.565670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.353 [2024-11-19 10:54:45.569874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.353 [2024-11-19 10:54:45.569967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.353 [2024-11-19 10:54:45.569987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:38.353 [2024-11-19 10:54:45.574233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.353 [2024-11-19 10:54:45.574352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.353 [2024-11-19 10:54:45.574371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.353 [2024-11-19 10:54:45.578664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.353 [2024-11-19 10:54:45.578733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.353 [2024-11-19 10:54:45.578752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.353 [2024-11-19 10:54:45.583045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.353 [2024-11-19 10:54:45.583104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.353 [2024-11-19 10:54:45.583122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.353 [2024-11-19 10:54:45.587311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.353 [2024-11-19 10:54:45.587377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.353 [2024-11-19 10:54:45.587403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:38.353 [2024-11-19 10:54:45.591628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.353 [2024-11-19 10:54:45.591683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.353 [2024-11-19 10:54:45.591702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.353 [2024-11-19 10:54:45.596107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.353 [2024-11-19 10:54:45.596167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.353 [2024-11-19 10:54:45.596187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.353 [2024-11-19 10:54:45.600437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.353 [2024-11-19 10:54:45.600514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.353 [2024-11-19 10:54:45.600532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.353 [2024-11-19 10:54:45.605133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.353 [2024-11-19 10:54:45.605204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.353 [2024-11-19 10:54:45.605224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:38.353 [2024-11-19 10:54:45.610106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.353 [2024-11-19 10:54:45.610195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.353 [2024-11-19 10:54:45.610215] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.353 [2024-11-19 10:54:45.615516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.353 [2024-11-19 10:54:45.615574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.353 [2024-11-19 10:54:45.615593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.353 [2024-11-19 10:54:45.621081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.353 [2024-11-19 10:54:45.621136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.353 [2024-11-19 10:54:45.621156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.353 [2024-11-19 10:54:45.626549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.353 [2024-11-19 10:54:45.626604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.353 [2024-11-19 10:54:45.626624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:38.353 [2024-11-19 10:54:45.632084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.353 [2024-11-19 10:54:45.632186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.353 [2024-11-19 10:54:45.632205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.353 [2024-11-19 10:54:45.637669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.353 [2024-11-19 10:54:45.637729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.353 [2024-11-19 10:54:45.637748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.353 [2024-11-19 10:54:45.642794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.353 [2024-11-19 10:54:45.642871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.353 [2024-11-19 10:54:45.642890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.353 [2024-11-19 10:54:45.647604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.353 [2024-11-19 10:54:45.647657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.353 [2024-11-19 10:54:45.647675] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:38.353 [2024-11-19 10:54:45.652241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.353 [2024-11-19 10:54:45.652306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.353 [2024-11-19 10:54:45.652324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.353 [2024-11-19 10:54:45.656592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.353 [2024-11-19 10:54:45.656654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.353 [2024-11-19 10:54:45.656673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.353 [2024-11-19 10:54:45.661026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.353 [2024-11-19 10:54:45.661099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.353 [2024-11-19 10:54:45.661118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.353 [2024-11-19 10:54:45.665618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.353 [2024-11-19 10:54:45.665673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.353 [2024-11-19 10:54:45.665691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:38.353 [2024-11-19 10:54:45.670241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.354 [2024-11-19 10:54:45.670300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.354 [2024-11-19 10:54:45.670318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.354 [2024-11-19 10:54:45.674884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.354 [2024-11-19 10:54:45.674982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.354 [2024-11-19 10:54:45.675001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.354 [2024-11-19 10:54:45.679494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.354 [2024-11-19 10:54:45.679575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.354 [2024-11-19 
10:54:45.679594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.354 [2024-11-19 10:54:45.683748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.354 [2024-11-19 10:54:45.683809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.354 [2024-11-19 10:54:45.683828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:38.354 [2024-11-19 10:54:45.688023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.354 [2024-11-19 10:54:45.688086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.354 [2024-11-19 10:54:45.688105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.354 [2024-11-19 10:54:45.692321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.354 [2024-11-19 10:54:45.692422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.354 [2024-11-19 10:54:45.692441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.354 [2024-11-19 10:54:45.696644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.354 [2024-11-19 10:54:45.696707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.354 [2024-11-19 10:54:45.696726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.354 [2024-11-19 10:54:45.700886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.354 [2024-11-19 10:54:45.700954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.354 [2024-11-19 10:54:45.700973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:38.354 [2024-11-19 10:54:45.705371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.354 [2024-11-19 10:54:45.705468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.354 [2024-11-19 10:54:45.705487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.354 [2024-11-19 10:54:45.710179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.354 [2024-11-19 10:54:45.710240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
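The run of records above is one failure signature repeating: the TCP transport finishes the CRC32C check on a received data PDU (tcp.c:2233:data_crc32_calc_done), the computed digest does not match the data digest carried in the PDU, the offending WRITE is dumped by nvme_io_qpair_print_command, and the command is completed as COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, i.e. retryable. NVMe/TCP's optional data digest (DDGST) is a CRC32C over the PDU payload, so this test is deliberately injecting digest corruption and checking that every mangled WRITE is rejected rather than acted on. As a minimal sketch of the check itself (not SPDK's implementation, which computes digests via spdk_crc32c_update() or offloads them, hence the *_calc_done completion callback here), a plain bitwise CRC32C in C over a hypothetical payload buffer:

#include <stdint.h>
#include <stdio.h>
#include <stddef.h>

/* Reflected CRC32C (Castagnoli), polynomial 0x1EDC6F41 (0x82F63B78 reversed). */
static uint32_t crc32c(const void *buf, size_t len)
{
	const uint8_t *p = buf;
	uint32_t crc = 0xFFFFFFFFu;

	while (len--) {
		crc ^= *p++;
		for (int i = 0; i < 8; i++)
			crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
	}
	return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
	uint8_t pdu_data[32] = { 0 };                         /* stand-in for a data PDU payload */
	uint32_t ddgst = crc32c(pdu_data, sizeof(pdu_data));  /* digest the sender would append */

	pdu_data[0] ^= 0x01;                                  /* one bit flipped in flight */
	if (crc32c(pdu_data, sizeof(pdu_data)) != ddgst)
		printf("data digest error: reject PDU, complete with transient transport error\n");
	return 0;
}

A single flipped bit is enough: any CRC with a nonzero polynomial detects all one-bit errors, so each corrupted PDU in this test reliably produces exactly one of the record pairs above.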
00:27:38.354 [2024-11-19 10:54:45.710263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.354 [2024-11-19 10:54:45.714446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.354 [2024-11-19 10:54:45.714507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.354 [2024-11-19 10:54:45.714527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.354 [2024-11-19 10:54:45.718711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.354 [2024-11-19 10:54:45.718771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.354 [2024-11-19 10:54:45.718789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:38.354 [2024-11-19 10:54:45.723023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.354 [2024-11-19 10:54:45.723084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.354 [2024-11-19 10:54:45.723102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.354 [2024-11-19 10:54:45.727240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.354 [2024-11-19 10:54:45.727294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.354 [2024-11-19 10:54:45.727312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.354 [2024-11-19 10:54:45.731485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.354 [2024-11-19 10:54:45.731546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.354 [2024-11-19 10:54:45.731564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.354 [2024-11-19 10:54:45.735745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.354 [2024-11-19 10:54:45.735810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.354 [2024-11-19 10:54:45.735829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:38.354 [2024-11-19 10:54:45.740042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.354 [2024-11-19 10:54:45.740107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:38.354 [2024-11-19 10:54:45.740125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.354 [2024-11-19 10:54:45.744407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.354 [2024-11-19 10:54:45.744484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.354 [2024-11-19 10:54:45.744504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.354 [2024-11-19 10:54:45.748767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.354 [2024-11-19 10:54:45.748828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.354 [2024-11-19 10:54:45.748847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.354 [2024-11-19 10:54:45.753633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.354 [2024-11-19 10:54:45.753735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.354 [2024-11-19 10:54:45.753755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:38.354 [2024-11-19 10:54:45.758775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.354 [2024-11-19 10:54:45.758877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.354 [2024-11-19 10:54:45.758897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.354 [2024-11-19 10:54:45.765679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.354 [2024-11-19 10:54:45.765798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.354 [2024-11-19 10:54:45.765817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.354 [2024-11-19 10:54:45.772314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.354 [2024-11-19 10:54:45.772447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.354 [2024-11-19 10:54:45.772466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.354 [2024-11-19 10:54:45.778489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.354 [2024-11-19 10:54:45.778658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.354 [2024-11-19 10:54:45.778677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:38.354 [2024-11-19 10:54:45.784932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.354 [2024-11-19 10:54:45.785068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.354 [2024-11-19 10:54:45.785086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.354 [2024-11-19 10:54:45.791238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.354 [2024-11-19 10:54:45.791389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.354 [2024-11-19 10:54:45.791408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.354 [2024-11-19 10:54:45.797719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.354 [2024-11-19 10:54:45.797881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.354 [2024-11-19 10:54:45.797900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.615 [2024-11-19 10:54:45.804344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.615 [2024-11-19 10:54:45.804499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.615 [2024-11-19 10:54:45.804518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:38.615 [2024-11-19 10:54:45.811004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.615 [2024-11-19 10:54:45.811154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.615 [2024-11-19 10:54:45.811174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.615 [2024-11-19 10:54:45.817241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.615 [2024-11-19 10:54:45.817407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.615 [2024-11-19 10:54:45.817426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.615 [2024-11-19 10:54:45.823806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.615 [2024-11-19 10:54:45.823959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.615 [2024-11-19 10:54:45.823979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.615 [2024-11-19 10:54:45.829556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.615 [2024-11-19 10:54:45.829912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.615 [2024-11-19 10:54:45.829933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:38.615 [2024-11-19 10:54:45.835703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.615 [2024-11-19 10:54:45.836021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.615 [2024-11-19 10:54:45.836041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.615 [2024-11-19 10:54:45.842339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.615 [2024-11-19 10:54:45.842658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.615 [2024-11-19 10:54:45.842679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.615 [2024-11-19 10:54:45.849668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.615 [2024-11-19 10:54:45.849802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.615 [2024-11-19 10:54:45.849821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.615 [2024-11-19 10:54:45.856372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.615 [2024-11-19 10:54:45.856703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.615 [2024-11-19 10:54:45.856729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:38.615 [2024-11-19 10:54:45.863540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.615 [2024-11-19 10:54:45.863875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.615 [2024-11-19 10:54:45.863896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.615 [2024-11-19 10:54:45.870283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.615 [2024-11-19 10:54:45.870513] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.615 [2024-11-19 10:54:45.870533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.615 [2024-11-19 10:54:45.877305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.615 [2024-11-19 10:54:45.877641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.615 [2024-11-19 10:54:45.877662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.615 [2024-11-19 10:54:45.884290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.615 [2024-11-19 10:54:45.884638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.615 [2024-11-19 10:54:45.884658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:38.615 [2024-11-19 10:54:45.892414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.615 [2024-11-19 10:54:45.892678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.615 [2024-11-19 10:54:45.892698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.615 [2024-11-19 10:54:45.900080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.615 [2024-11-19 10:54:45.900343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.615 [2024-11-19 10:54:45.900364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.615 [2024-11-19 10:54:45.907671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.615 [2024-11-19 10:54:45.907922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.615 [2024-11-19 10:54:45.907944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.615 [2024-11-19 10:54:45.915440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.615 [2024-11-19 10:54:45.915706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.615 [2024-11-19 10:54:45.915727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:38.615 [2024-11-19 10:54:45.922979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.615 [2024-11-19 10:54:45.923323] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.615 [2024-11-19 10:54:45.923344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.615 [2024-11-19 10:54:45.930321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.615 [2024-11-19 10:54:45.930641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.615 [2024-11-19 10:54:45.930662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.615 [2024-11-19 10:54:45.938473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.615 [2024-11-19 10:54:45.938774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.615 [2024-11-19 10:54:45.938795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.615 [2024-11-19 10:54:45.945918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.615 [2024-11-19 10:54:45.946230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.615 [2024-11-19 10:54:45.946251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:38.615 [2024-11-19 10:54:45.953138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.615 [2024-11-19 10:54:45.953437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.615 [2024-11-19 10:54:45.953458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.615 [2024-11-19 10:54:45.960709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.615 [2024-11-19 10:54:45.961014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.615 [2024-11-19 10:54:45.961036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.615 [2024-11-19 10:54:45.967661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.615 [2024-11-19 10:54:45.967964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.615 [2024-11-19 10:54:45.967985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.615 [2024-11-19 10:54:45.974666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.615 [2024-11-19 
10:54:45.974968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.615 [2024-11-19 10:54:45.974989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:38.615 [2024-11-19 10:54:45.981860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.616 [2024-11-19 10:54:45.982147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.616 [2024-11-19 10:54:45.982168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.616 [2024-11-19 10:54:45.988760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.616 [2024-11-19 10:54:45.988995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.616 [2024-11-19 10:54:45.989015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.616 [2024-11-19 10:54:45.995400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.616 [2024-11-19 10:54:45.995681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.616 [2024-11-19 10:54:45.995703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.616 [2024-11-19 10:54:46.001904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.616 [2024-11-19 10:54:46.002137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.616 [2024-11-19 10:54:46.002159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:38.616 [2024-11-19 10:54:46.008185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.616 [2024-11-19 10:54:46.008459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.616 [2024-11-19 10:54:46.008480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.616 [2024-11-19 10:54:46.014973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.616 [2024-11-19 10:54:46.015263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.616 [2024-11-19 10:54:46.015284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.616 [2024-11-19 10:54:46.021598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 
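Every completion in this stream decodes identically: the (00/22) printed by spdk_nvme_print_completion is status code type 0x0 (generic command status) with status code 0x22, Command Transient Transport Error, and dnr:0 leaves the do-not-retry bit clear, so the initiator may resubmit; the sqhd values stepping 0x0002, 0x0022, 0x0042, 0x0062 are simply the submission queue head advancing as each rejected WRITE completes. A hedged sketch of unpacking those fields from the 16-bit status word in completion dword 3 (bit layout per the NVMe base specification; the struct and names are illustrative, not SPDK's):

#include <stdint.h>
#include <stdio.h>

/* CQE DW3[31:16], with the phase tag in bit 0 of the 16-bit word. */
struct nvme_status {
	unsigned p   : 1; /* phase tag */
	unsigned sc  : 8; /* status code */
	unsigned sct : 3; /* status code type */
	unsigned crd : 2; /* command retry delay */
	unsigned m   : 1; /* more information available */
	unsigned dnr : 1; /* do not retry */
};

int main(void)
{
	uint16_t sw = (uint16_t)(0x22 << 1); /* SCT 0x0, SC 0x22, all flags clear */
	struct nvme_status s = {
		.p   = sw & 1u,
		.sc  = (sw >> 1) & 0xFFu,
		.sct = (sw >> 9) & 0x7u,
		.crd = (sw >> 12) & 0x3u,
		.m   = (sw >> 14) & 1u,
		.dnr = (sw >> 15) & 1u,
	};

	printf("(%02x/%02x) dnr:%u -> %s\n", (unsigned)s.sct, (unsigned)s.sc,
	       (unsigned)s.dnr,
	       (s.sct == 0 && s.sc == 0x22) ?
	           "COMMAND TRANSIENT TRANSPORT ERROR, retryable" : "other");
	return 0;
}

The retry traffic these completions represent is also what the performance ticker further down measures: 5258.00 IOPS at 657.25 MiB/s agree exactly for a 128 KiB transfer size (5258 x 128 KiB = 657.25 MiB/s), consistent with the len:32 WRITEs here at an assumed 4 KiB block size.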
00:27:38.616 [2024-11-19 10:54:46.021880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.616 [2024-11-19 10:54:46.021902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.616 [2024-11-19 10:54:46.029023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.616 [2024-11-19 10:54:46.029354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.616 [2024-11-19 10:54:46.029374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:38.616 [2024-11-19 10:54:46.036548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.616 [2024-11-19 10:54:46.036851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.616 [2024-11-19 10:54:46.036871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.616 [2024-11-19 10:54:46.044249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.616 [2024-11-19 10:54:46.044569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.616 [2024-11-19 10:54:46.044595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.616 [2024-11-19 10:54:46.052031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.616 [2024-11-19 10:54:46.052340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.616 [2024-11-19 10:54:46.052361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.616 [2024-11-19 10:54:46.058962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.616 [2024-11-19 10:54:46.059222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.616 [2024-11-19 10:54:46.059243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:38.876 [2024-11-19 10:54:46.064666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.876 [2024-11-19 10:54:46.064953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.876 [2024-11-19 10:54:46.064975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.876 [2024-11-19 10:54:46.071056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with 
pdu=0x2000166ff3c8 00:27:38.876 [2024-11-19 10:54:46.071282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.876 [2024-11-19 10:54:46.071302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.876 [2024-11-19 10:54:46.075806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.876 [2024-11-19 10:54:46.076036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.876 [2024-11-19 10:54:46.076056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.876 [2024-11-19 10:54:46.080334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.876 [2024-11-19 10:54:46.080560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.876 [2024-11-19 10:54:46.080580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:38.876 [2024-11-19 10:54:46.084654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.876 [2024-11-19 10:54:46.084877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.876 [2024-11-19 10:54:46.084897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.876 [2024-11-19 10:54:46.089109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.876 [2024-11-19 10:54:46.089334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.876 [2024-11-19 10:54:46.089354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.876 [2024-11-19 10:54:46.093663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.876 [2024-11-19 10:54:46.093893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.876 [2024-11-19 10:54:46.093914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.876 [2024-11-19 10:54:46.098268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.876 [2024-11-19 10:54:46.098493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.876 [2024-11-19 10:54:46.098513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:38.876 [2024-11-19 10:54:46.102755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.876 [2024-11-19 10:54:46.103002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.876 [2024-11-19 10:54:46.103024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.876 [2024-11-19 10:54:46.107362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.876 [2024-11-19 10:54:46.107586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.876 [2024-11-19 10:54:46.107607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.876 [2024-11-19 10:54:46.111879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.876 [2024-11-19 10:54:46.112119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.876 [2024-11-19 10:54:46.112139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.876 [2024-11-19 10:54:46.116443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.876 [2024-11-19 10:54:46.116669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.876 [2024-11-19 10:54:46.116689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:38.876 [2024-11-19 10:54:46.120854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.876 [2024-11-19 10:54:46.121085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.876 [2024-11-19 10:54:46.121106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.876 [2024-11-19 10:54:46.125427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.876 [2024-11-19 10:54:46.125652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.876 [2024-11-19 10:54:46.125672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.876 [2024-11-19 10:54:46.129979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.876 [2024-11-19 10:54:46.130215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.876 [2024-11-19 10:54:46.130235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.876 [2024-11-19 10:54:46.134514] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.876 [2024-11-19 10:54:46.134737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.876 [2024-11-19 10:54:46.134758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:38.876 [2024-11-19 10:54:46.139051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.876 [2024-11-19 10:54:46.139274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.876 [2024-11-19 10:54:46.139294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.876 [2024-11-19 10:54:46.143391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.876 [2024-11-19 10:54:46.143614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.876 [2024-11-19 10:54:46.143635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.876 [2024-11-19 10:54:46.147766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.876 [2024-11-19 10:54:46.147997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.876 [2024-11-19 10:54:46.148017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.876 [2024-11-19 10:54:46.152405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.876 [2024-11-19 10:54:46.152634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.876 [2024-11-19 10:54:46.152654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:38.876 [2024-11-19 10:54:46.157502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.876 [2024-11-19 10:54:46.157724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.876 [2024-11-19 10:54:46.157745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.876 [2024-11-19 10:54:46.162956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.876 [2024-11-19 10:54:46.163213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.876 [2024-11-19 10:54:46.163232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.876 [2024-11-19 10:54:46.169100] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.876 [2024-11-19 10:54:46.169371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.876 [2024-11-19 10:54:46.169392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.876 [2024-11-19 10:54:46.175994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.876 [2024-11-19 10:54:46.176273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.876 [2024-11-19 10:54:46.176298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:38.876 [2024-11-19 10:54:46.183180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.876 [2024-11-19 10:54:46.183435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.877 [2024-11-19 10:54:46.183455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.877 [2024-11-19 10:54:46.190537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.877 [2024-11-19 10:54:46.190836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.877 [2024-11-19 10:54:46.190857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.877 [2024-11-19 10:54:46.197492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.877 [2024-11-19 10:54:46.197787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.877 [2024-11-19 10:54:46.197808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.877 [2024-11-19 10:54:46.204827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.877 [2024-11-19 10:54:46.205134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.877 [2024-11-19 10:54:46.205155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:38.877 [2024-11-19 10:54:46.211697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.877 [2024-11-19 10:54:46.211985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.877 [2024-11-19 10:54:46.212006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.877 
[2024-11-19 10:54:46.218985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.877 [2024-11-19 10:54:46.219274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.877 [2024-11-19 10:54:46.219296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.877 [2024-11-19 10:54:46.226493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.877 [2024-11-19 10:54:46.226834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.877 [2024-11-19 10:54:46.226856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.877 [2024-11-19 10:54:46.234028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.877 [2024-11-19 10:54:46.234327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.877 [2024-11-19 10:54:46.234348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:38.877 [2024-11-19 10:54:46.241081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.877 [2024-11-19 10:54:46.241407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.877 [2024-11-19 10:54:46.241428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.877 [2024-11-19 10:54:46.248865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.877 [2024-11-19 10:54:46.249247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.877 [2024-11-19 10:54:46.249268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.877 [2024-11-19 10:54:46.256100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.877 [2024-11-19 10:54:46.256409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.877 [2024-11-19 10:54:46.256431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.877 [2024-11-19 10:54:46.264051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.877 [2024-11-19 10:54:46.264341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.877 [2024-11-19 10:54:46.264364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:27:38.877 [2024-11-19 10:54:46.271343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.877 [2024-11-19 10:54:46.271646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.877 [2024-11-19 10:54:46.271670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.877 [2024-11-19 10:54:46.279174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.877 [2024-11-19 10:54:46.279493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.877 [2024-11-19 10:54:46.279515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.877 [2024-11-19 10:54:46.286552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.877 [2024-11-19 10:54:46.286856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.877 [2024-11-19 10:54:46.286878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.877 [2024-11-19 10:54:46.293486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.877 [2024-11-19 10:54:46.293803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.877 [2024-11-19 10:54:46.293824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:38.877 [2024-11-19 10:54:46.300928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.877 [2024-11-19 10:54:46.301270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.877 [2024-11-19 10:54:46.301293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:38.877 [2024-11-19 10:54:46.308529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.877 [2024-11-19 10:54:46.308799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.877 [2024-11-19 10:54:46.308822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:38.877 [2024-11-19 10:54:46.315211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.877 [2024-11-19 10:54:46.315454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.877 [2024-11-19 10:54:46.315475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:38.877 [2024-11-19 10:54:46.322025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:38.877 [2024-11-19 10:54:46.322288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.877 [2024-11-19 10:54:46.322311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.137 [2024-11-19 10:54:46.328854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.137 [2024-11-19 10:54:46.329117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.137 [2024-11-19 10:54:46.329138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.137 [2024-11-19 10:54:46.335585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.137 [2024-11-19 10:54:46.335853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.137 [2024-11-19 10:54:46.335875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.137 [2024-11-19 10:54:46.342525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.137 [2024-11-19 10:54:46.342806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.137 [2024-11-19 10:54:46.342827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.137 [2024-11-19 10:54:46.349285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.137 [2024-11-19 10:54:46.349570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.137 [2024-11-19 10:54:46.349592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.137 [2024-11-19 10:54:46.355996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.137 [2024-11-19 10:54:46.356255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.137 [2024-11-19 10:54:46.356277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.137 [2024-11-19 10:54:46.363356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.137 [2024-11-19 10:54:46.363658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.137 [2024-11-19 10:54:46.363683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:39.137 [2024-11-19 10:54:46.370353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8
00:27:39.137 [2024-11-19 10:54:46.370657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.137 [2024-11-19 10:54:46.370678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... the same three-entry sequence (data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8, the failed WRITE sqid:1 cid:0 nsid:1 len:32 with varying lba, and a TRANSIENT TRANSPORT ERROR (00/22) completion whose sqhd cycles 0002/0022/0042/0062) repeats every ~7 ms from 10:54:46.377 through 10:54:46.481 ...]
00:27:39.138 5258.00 IOPS, 657.25 MiB/s [2024-11-19T09:54:46.587Z]
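For anyone skimming this stretch of the console: each burst above is the TCP transport recomputing the CRC32C data digest (DDGST) over a received PDU payload, finding that it does not match the digest carried in the PDU (tcp.c:2233:data_crc32_calc_done), and the affected WRITE then completing back to the host as TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, i.e. retryable. The sketch below isolates that digest comparison; it is illustrative C under stated assumptions, not SPDK's implementation, and the payload bytes, the 512-byte block size, and the "received" digest value are all invented.

/* Minimal sketch of an NVMe/TCP-style data digest check.  DDGST is the
 * CRC32C of the PDU payload; the bitwise loop below is the table-free
 * form of CRC-32C (production code uses lookup tables or the SSE4.2/ARMv8
 * CRC instructions instead). */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

static uint32_t
crc32c(const uint8_t *buf, size_t len)
{
	uint32_t crc = 0xFFFFFFFFu;

	while (len--) {
		crc ^= *buf++;
		for (int k = 0; k < 8; k++) {
			/* 0x82F63B78 is the reflected Castagnoli polynomial */
			crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
		}
	}
	return crc ^ 0xFFFFFFFFu;	/* final inversion */
}

int
main(void)
{
	uint8_t payload[32 * 512];		/* len:32 blocks, assuming 512 B blocks */
	uint32_t ddgst_received = 0x12345678u;	/* invented, deliberately wrong trailer */
	uint32_t ddgst_calc;

	for (size_t i = 0; i < sizeof(payload); i++) {
		payload[i] = (uint8_t)i;
	}

	ddgst_calc = crc32c(payload, sizeof(payload));
	if (ddgst_calc != ddgst_received) {
		/* the condition this log reports as "Data digest error" */
		printf("Data digest error: calculated 0x%08x, PDU carried 0x%08x\n",
		       ddgst_calc, ddgst_received);
	}
	return 0;
}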
[... the digest-error/WRITE/TRANSIENT TRANSPORT ERROR sequence resumes at 10:54:46.488, immediately after the throughput sample above, and runs uninterrupted through 10:54:46.849 on the same tqpair=(0x543b20) and pdu=0x2000166ff3c8; the gap between digest errors tightens from ~7 ms to ~4 ms over this stretch, with lba and sqhd varying as before ...]
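The completion lines themselves are just the CQE status word pretty-printed: "(00/22)" is SCT/SC in hex (status code type 0x0, generic; status code 0x22, Transient Transport Error), and p/m/dnr are the phase, More, and Do Not Retry bits, with dnr:0 meaning the host may retry. A second sketch, again illustrative rather than SPDK's spdk_nvme_print_completion, decodes a hand-built status word assuming the NVMe 1.4 CQE layout (phase tag in bit 0 of the upper half of Dword 3, the 15-bit status field above it):

/* Decode the (phase + status) word from NVMe CQE Dword 3 bits 31:16. */
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	uint16_t sw = 0x0044;	/* invented example: sc=0x22, sct=0, p/m/dnr all 0 */

	unsigned p   =  sw        & 0x1;	/* phase tag */
	unsigned sc  = (sw >> 1)  & 0xFF;	/* status code */
	unsigned sct = (sw >> 9)  & 0x7;	/* status code type */
	unsigned m   = (sw >> 14) & 0x1;	/* more */
	unsigned dnr = (sw >> 15) & 0x1;	/* do not retry */

	/* prints "(00/22) p:0 m:0 dnr:0", matching the log's format */
	printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
	return 0;
}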
[... the sequence continues from 10:54:46.850 through 10:54:47.099, every entry still a WRITE on qid:1 cid:0 with len:32, lba and sqhd varying as before ...]
00:27:39.663 [2024-11-19 10:54:47.099861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8
00:27:39.663 [2024-11-19 10:54:47.100043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.663 [2024-11-19 
10:54:47.100066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.663 [2024-11-19 10:54:47.105994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.663 [2024-11-19 10:54:47.106123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.664 [2024-11-19 10:54:47.106143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.924 [2024-11-19 10:54:47.112333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.924 [2024-11-19 10:54:47.112465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.924 [2024-11-19 10:54:47.112484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.924 [2024-11-19 10:54:47.118801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.924 [2024-11-19 10:54:47.118976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.924 [2024-11-19 10:54:47.118995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.924 [2024-11-19 10:54:47.125161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.924 [2024-11-19 10:54:47.125298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.924 [2024-11-19 10:54:47.125317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.924 [2024-11-19 10:54:47.131919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.924 [2024-11-19 10:54:47.132084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.924 [2024-11-19 10:54:47.132102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.924 [2024-11-19 10:54:47.139321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.924 [2024-11-19 10:54:47.139496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.924 [2024-11-19 10:54:47.139514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.924 [2024-11-19 10:54:47.145387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.924 [2024-11-19 10:54:47.145438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:39.924 [2024-11-19 10:54:47.145456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.924 [2024-11-19 10:54:47.150488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.925 [2024-11-19 10:54:47.150542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.925 [2024-11-19 10:54:47.150560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.925 [2024-11-19 10:54:47.155823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.925 [2024-11-19 10:54:47.155888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.925 [2024-11-19 10:54:47.155906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.925 [2024-11-19 10:54:47.161429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.925 [2024-11-19 10:54:47.161486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.925 [2024-11-19 10:54:47.161505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.925 [2024-11-19 10:54:47.166383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.925 [2024-11-19 10:54:47.166434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.925 [2024-11-19 10:54:47.166452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.925 [2024-11-19 10:54:47.171322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.925 [2024-11-19 10:54:47.171372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.925 [2024-11-19 10:54:47.171390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.925 [2024-11-19 10:54:47.176695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.925 [2024-11-19 10:54:47.176747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.925 [2024-11-19 10:54:47.176766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.925 [2024-11-19 10:54:47.181852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.925 [2024-11-19 10:54:47.181907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:39.925 [2024-11-19 10:54:47.181926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.925 [2024-11-19 10:54:47.186830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.925 [2024-11-19 10:54:47.186881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.925 [2024-11-19 10:54:47.186900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.925 [2024-11-19 10:54:47.191634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.925 [2024-11-19 10:54:47.191692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.925 [2024-11-19 10:54:47.191710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.925 [2024-11-19 10:54:47.196282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.925 [2024-11-19 10:54:47.196334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.925 [2024-11-19 10:54:47.196352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.925 [2024-11-19 10:54:47.200943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.925 [2024-11-19 10:54:47.201003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.925 [2024-11-19 10:54:47.201021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.925 [2024-11-19 10:54:47.205644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.925 [2024-11-19 10:54:47.205709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.925 [2024-11-19 10:54:47.205728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.925 [2024-11-19 10:54:47.210381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.925 [2024-11-19 10:54:47.210442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.925 [2024-11-19 10:54:47.210460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.925 [2024-11-19 10:54:47.215290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.925 [2024-11-19 10:54:47.215351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.925 [2024-11-19 10:54:47.215370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.925 [2024-11-19 10:54:47.220877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.925 [2024-11-19 10:54:47.220987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.925 [2024-11-19 10:54:47.221005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.925 [2024-11-19 10:54:47.225678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.925 [2024-11-19 10:54:47.225744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.925 [2024-11-19 10:54:47.225761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.925 [2024-11-19 10:54:47.230378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.925 [2024-11-19 10:54:47.230428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.925 [2024-11-19 10:54:47.230446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.925 [2024-11-19 10:54:47.236169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.925 [2024-11-19 10:54:47.236237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.925 [2024-11-19 10:54:47.236256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.925 [2024-11-19 10:54:47.242344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.925 [2024-11-19 10:54:47.242450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.925 [2024-11-19 10:54:47.242472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.925 [2024-11-19 10:54:47.249236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.925 [2024-11-19 10:54:47.249343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.925 [2024-11-19 10:54:47.249363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.925 [2024-11-19 10:54:47.255940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.925 [2024-11-19 10:54:47.256088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.925 [2024-11-19 10:54:47.256124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.925 [2024-11-19 10:54:47.263460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.925 [2024-11-19 10:54:47.263642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.925 [2024-11-19 10:54:47.263662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.925 [2024-11-19 10:54:47.271094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.925 [2024-11-19 10:54:47.271206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.925 [2024-11-19 10:54:47.271226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.925 [2024-11-19 10:54:47.276975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.925 [2024-11-19 10:54:47.277033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.925 [2024-11-19 10:54:47.277052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.925 [2024-11-19 10:54:47.282329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.925 [2024-11-19 10:54:47.282394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.925 [2024-11-19 10:54:47.282413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.925 [2024-11-19 10:54:47.287620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.925 [2024-11-19 10:54:47.287673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.925 [2024-11-19 10:54:47.287692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.925 [2024-11-19 10:54:47.292896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.925 [2024-11-19 10:54:47.292987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.925 [2024-11-19 10:54:47.293007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.925 [2024-11-19 10:54:47.298290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.925 [2024-11-19 10:54:47.298350] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.925 [2024-11-19 10:54:47.298369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.925 [2024-11-19 10:54:47.303275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.925 [2024-11-19 10:54:47.303328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.925 [2024-11-19 10:54:47.303347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.925 [2024-11-19 10:54:47.308015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.925 [2024-11-19 10:54:47.308071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.925 [2024-11-19 10:54:47.308090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.925 [2024-11-19 10:54:47.312568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.925 [2024-11-19 10:54:47.312623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.925 [2024-11-19 10:54:47.312645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.925 [2024-11-19 10:54:47.317134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.925 [2024-11-19 10:54:47.317187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.925 [2024-11-19 10:54:47.317206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.925 [2024-11-19 10:54:47.321919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.925 [2024-11-19 10:54:47.321977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.925 [2024-11-19 10:54:47.321996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.925 [2024-11-19 10:54:47.326704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.925 [2024-11-19 10:54:47.326757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.925 [2024-11-19 10:54:47.326776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.925 [2024-11-19 10:54:47.331436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.925 [2024-11-19 10:54:47.331505] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.925 [2024-11-19 10:54:47.331524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.925 [2024-11-19 10:54:47.336320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.925 [2024-11-19 10:54:47.336376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.925 [2024-11-19 10:54:47.336395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.925 [2024-11-19 10:54:47.341087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.925 [2024-11-19 10:54:47.341146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.925 [2024-11-19 10:54:47.341165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.925 [2024-11-19 10:54:47.345759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.925 [2024-11-19 10:54:47.345814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.925 [2024-11-19 10:54:47.345833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.925 [2024-11-19 10:54:47.350505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.925 [2024-11-19 10:54:47.350580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.925 [2024-11-19 10:54:47.350599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:39.925 [2024-11-19 10:54:47.355193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.925 [2024-11-19 10:54:47.355265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.925 [2024-11-19 10:54:47.355284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:39.925 [2024-11-19 10:54:47.359870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.926 [2024-11-19 10:54:47.359927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.926 [2024-11-19 10:54:47.359946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:39.926 [2024-11-19 10:54:47.364816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.926 [2024-11-19 
10:54:47.364879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.926 [2024-11-19 10:54:47.364897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:39.926 [2024-11-19 10:54:47.369603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:39.926 [2024-11-19 10:54:47.369658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.926 [2024-11-19 10:54:47.369676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:40.186 [2024-11-19 10:54:47.374340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:40.186 [2024-11-19 10:54:47.374395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.186 [2024-11-19 10:54:47.374414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:40.186 [2024-11-19 10:54:47.379054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:40.186 [2024-11-19 10:54:47.379107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.186 [2024-11-19 10:54:47.379129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:40.186 [2024-11-19 10:54:47.383887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:40.186 [2024-11-19 10:54:47.383967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.186 [2024-11-19 10:54:47.383987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:40.186 [2024-11-19 10:54:47.388826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:40.186 [2024-11-19 10:54:47.388905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.186 [2024-11-19 10:54:47.388924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:40.186 [2024-11-19 10:54:47.394133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:40.186 [2024-11-19 10:54:47.394195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.186 [2024-11-19 10:54:47.394214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:40.186 [2024-11-19 10:54:47.399219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 
00:27:40.186 [2024-11-19 10:54:47.399270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.186 [2024-11-19 10:54:47.399289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:40.186 [2024-11-19 10:54:47.404335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:40.186 [2024-11-19 10:54:47.404388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.186 [2024-11-19 10:54:47.404407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:40.186 [2024-11-19 10:54:47.409876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:40.186 [2024-11-19 10:54:47.409929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.186 [2024-11-19 10:54:47.409953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:40.186 [2024-11-19 10:54:47.415445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:40.186 [2024-11-19 10:54:47.415503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.186 [2024-11-19 10:54:47.415521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:40.186 [2024-11-19 10:54:47.420518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:40.186 [2024-11-19 10:54:47.420579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.186 [2024-11-19 10:54:47.420597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:40.186 [2024-11-19 10:54:47.425648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:40.186 [2024-11-19 10:54:47.425723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.186 [2024-11-19 10:54:47.425743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:40.186 [2024-11-19 10:54:47.430282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:40.186 [2024-11-19 10:54:47.430341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.186 [2024-11-19 10:54:47.430359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:40.186 [2024-11-19 10:54:47.434860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with 
pdu=0x2000166ff3c8 00:27:40.186 [2024-11-19 10:54:47.434914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.186 [2024-11-19 10:54:47.434933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:40.186 [2024-11-19 10:54:47.439258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:40.186 [2024-11-19 10:54:47.439311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.186 [2024-11-19 10:54:47.439330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:40.186 [2024-11-19 10:54:47.443780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:40.186 [2024-11-19 10:54:47.443833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.186 [2024-11-19 10:54:47.443852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:40.186 [2024-11-19 10:54:47.448407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:40.186 [2024-11-19 10:54:47.448459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.186 [2024-11-19 10:54:47.448478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:40.186 [2024-11-19 10:54:47.452984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:40.186 [2024-11-19 10:54:47.453078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.186 [2024-11-19 10:54:47.453096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:40.186 [2024-11-19 10:54:47.457448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:40.186 [2024-11-19 10:54:47.457501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.186 [2024-11-19 10:54:47.457520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:40.186 [2024-11-19 10:54:47.461728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8 00:27:40.186 [2024-11-19 10:54:47.461782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.186 [2024-11-19 10:54:47.461801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:40.186 [2024-11-19 10:54:47.466230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
00:27:40.186 [2024-11-19 10:54:47.466230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8
00:27:40.186 [2024-11-19 10:54:47.466301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:40.186 [2024-11-19 10:54:47.466319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:40.186 [2024-11-19 10:54:47.470760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8
00:27:40.186 [2024-11-19 10:54:47.470813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:40.186 [2024-11-19 10:54:47.470832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:40.186 [2024-11-19 10:54:47.475616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8
00:27:40.186 [2024-11-19 10:54:47.475680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:40.186 [2024-11-19 10:54:47.475699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:40.186 [2024-11-19 10:54:47.480894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8
00:27:40.186 [2024-11-19 10:54:47.480955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:40.186 [2024-11-19 10:54:47.480974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:40.186 [2024-11-19 10:54:47.485492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8
00:27:40.187 [2024-11-19 10:54:47.485555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:40.187 [2024-11-19 10:54:47.485574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:40.187 5663.50 IOPS, 707.94 MiB/s [2024-11-19T09:54:47.636Z]
[2024-11-19 10:54:47.490970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x543b20) with pdu=0x2000166ff3c8
00:27:40.187 [2024-11-19 10:54:47.491027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:40.187 [2024-11-19 10:54:47.491046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:40.187
00:27:40.187 Latency(us)
00:27:40.187 [2024-11-19T09:54:47.636Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:40.187 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:27:40.187 nvme0n1 : 2.00 5662.45 707.81 0.00 0.00 2820.88 1852.10 8092.27
00:27:40.187 [2024-11-19T09:54:47.636Z] ===================================================================================================================
00:27:40.187 [2024-11-19T09:54:47.636Z] Total : 5662.45 707.81 0.00 0.00 2820.88 1852.10 8092.27
00:27:40.187 {
00:27:40.187 "results": [
00:27:40.187 {
00:27:40.187 "job": "nvme0n1",
00:27:40.187 "core_mask": "0x2",
00:27:40.187 "workload": "randwrite",
00:27:40.187 "status": "finished",
00:27:40.187 "queue_depth": 16,
00:27:40.187 "io_size": 131072,
00:27:40.187 "runtime": 2.004079,
00:27:40.187 "iops": 5662.451430307887,
00:27:40.187 "mibps": 707.8064287884858,
00:27:40.187 "io_failed": 0,
00:27:40.187 "io_timeout": 0,
00:27:40.187 "avg_latency_us": 2820.877303949365,
00:27:40.187 "min_latency_us": 1852.104347826087,
00:27:40.187 "max_latency_us": 8092.271304347826
00:27:40.187 }
00:27:40.187 ],
00:27:40.187 "core_count": 1
00:27:40.187 }
00:27:40.187 10:54:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
10:54:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
10:54:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
10:54:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:40.187 | .driver_specific
00:27:40.187 | .nvme_error
00:27:40.187 | .status_code
00:27:40.187 | .command_transient_transport_error'
00:27:40.447 10:54:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 367 > 0 ))
00:27:40.447 10:54:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1840343
00:27:40.447 10:54:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1840343 ']'
00:27:40.447 10:54:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1840343
00:27:40.447 10:54:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:27:40.447 10:54:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:40.447 10:54:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1840343
00:27:40.447 10:54:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:27:40.447 10:54:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:27:40.447 10:54:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1840343'
killing process with pid 1840343
00:27:40.447 10:54:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1840343
Received shutdown signal, test time was about 2.000000 seconds
00:27:40.447
00:27:40.447 Latency(us)
[2024-11-19T09:54:47.896Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:40.447 [2024-11-19T09:54:47.896Z] ===================================================================================================================
00:27:40.447 [2024-11-19T09:54:47.896Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:40.447 10:54:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1840343
00:27:40.706 10:54:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1838676
00:27:40.707 10:54:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1838676 ']'
00:27:40.707 10:54:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1838676
00:27:40.707 10:54:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:27:40.707 10:54:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:40.707 10:54:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1838676
00:27:40.707 10:54:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:27:40.707 10:54:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:27:40.707 10:54:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1838676'
killing process with pid 1838676
00:27:40.707 10:54:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1838676
00:27:40.707 10:54:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1838676
00:27:40.707
00:27:40.707 real 0m13.851s
00:27:40.707 user 0m26.670s
00:27:40.707 sys 0m4.373s
00:27:40.707 10:54:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:40.707 10:54:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:40.707 ************************************
00:27:40.707 END TEST nvmf_digest_error
00:27:40.707 ************************************
00:27:40.707 10:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:27:40.707 10:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:27:40.707 10:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:40.707 10:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync
00:27:40.707 10:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:40.707 10:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e
00:27:40.707 10:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:40.707 10:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:40.966 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
10:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:40.966 10:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e
00:27:40.966 10:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0
00:27:40.966 10:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 1838676 ']'
00:27:40.966 10:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 1838676
00:27:40.966 10:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 1838676 ']'
00:27:40.966 10:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 1838676
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1838676) - No such process
00:27:40.966 10:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 1838676 is not found'
Process with pid 1838676 is not found
00:27:40.966 10:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:27:40.966 10:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:27:40.966 10:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:27:40.966 10:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr
00:27:40.966 10:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save
00:27:40.966 10:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:27:40.966 10:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore
00:27:40.966 10:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:27:40.966 10:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns
00:27:40.966 10:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:40.966 10:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:40.966 10:54:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:42.886 10:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:27:42.886
00:27:42.886 real 0m37.091s
00:27:42.886 user 0m57.290s
00:27:42.886 sys 0m13.469s
00:27:42.886 10:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:42.886 10:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:27:42.886 ************************************
00:27:42.886 END TEST nvmf_digest
00:27:42.886 ************************************
00:27:42.886 10:54:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]]
00:27:42.886 10:54:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]]
00:27:42.886 10:54:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]]
00:27:42.886 10:54:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:27:42.886 10:54:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:27:42.886 10:54:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:42.886 10:54:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:27:43.147 ************************************
00:27:43.147 START TEST nvmf_bdevperf
00:27:43.147 ************************************
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:27:43.147 * Looking for test storage...
00:27:43.147 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-:
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-:
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<'
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 ))
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:27:43.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:43.147 --rc genhtml_branch_coverage=1
00:27:43.147 --rc genhtml_function_coverage=1
00:27:43.147 --rc genhtml_legend=1
00:27:43.147 --rc geninfo_all_blocks=1
00:27:43.147 --rc geninfo_unexecuted_blocks=1
00:27:43.147
00:27:43.147 '
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:27:43.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:43.147 --rc genhtml_branch_coverage=1
00:27:43.147 --rc genhtml_function_coverage=1
00:27:43.147 --rc genhtml_legend=1
00:27:43.147 --rc geninfo_all_blocks=1
00:27:43.147 --rc geninfo_unexecuted_blocks=1
00:27:43.147
00:27:43.147 '
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:27:43.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:43.147 --rc genhtml_branch_coverage=1
00:27:43.147 --rc genhtml_function_coverage=1
00:27:43.147 --rc genhtml_legend=1
00:27:43.147 --rc geninfo_all_blocks=1
00:27:43.147 --rc geninfo_unexecuted_blocks=1
00:27:43.147
00:27:43.147 '
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:27:43.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:43.147 --rc genhtml_branch_coverage=1
00:27:43.147 --rc genhtml_function_coverage=1
00:27:43.147 --rc genhtml_legend=1
00:27:43.147 --rc geninfo_all_blocks=1
00:27:43.147 --rc geninfo_unexecuted_blocks=1
00:27:43.147
00:27:43.147 '
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:43.147 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:43.148 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- #
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.148 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.148 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:27:43.148 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.148 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:27:43.148 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:43.148 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:43.148 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:43.148 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:43.148 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:43.148 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:43.148 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:43.148 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:43.148 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:43.148 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:43.148 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:43.148 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:43.148 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:27:43.148 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:43.148 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:43.148 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:43.148 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:43.148 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:43.148 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:43.148 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:43.148 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:43.148 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:43.148 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:43.148 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:27:43.148 10:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:49.718 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:49.718 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:27:49.718 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:49.718 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:49.718 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:49.718 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:49.718 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:49.718 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:27:49.718 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:49.718 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:27:49.718 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:27:49.718 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:27:49.718 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:27:49.718 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:27:49.718 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:27:49.718 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:49.718 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:49.718 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:49.718 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:49.718 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:49.718 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:49.718 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:49.718 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:49.718 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:49.718 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:49.718 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:49.718 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:49.718 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:49.718 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:49.718 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:49.719 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:49.719 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
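The discovery loop being traced here is easier to read stand-alone. A hedged re-rendering of the sysfs walk it performs, using the two E810 ports this run found (the operstate read is an assumption standing in for the trace's [[ up == up ]] check):

for pci in 0000:86:00.0 0000:86:00.1; do
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$netdir" ] || continue
        dev=${netdir##*/}                    # same ##*/ strip as in the trace, e.g. cvl_0_0
        state=$(cat "$netdir/operstate")     # assumed source of the "up == up" comparison
        echo "Found net devices under $pci: $dev ($state)"
    done
done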
00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:49.719 Found net devices under 0000:86:00.0: cvl_0_0 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:49.719 Found net devices under 0000:86:00.1: cvl_0_1 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:49.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:49.719 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.423 ms 00:27:49.719 00:27:49.719 --- 10.0.0.2 ping statistics --- 00:27:49.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:49.719 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:49.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:49.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:27:49.719 00:27:49.719 --- 10.0.0.1 ping statistics --- 00:27:49.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:49.719 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1844393 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1844393 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1844393 ']' 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:49.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:49.719 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:49.719 [2024-11-19 10:54:56.552118] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
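Condensed from the nvmf_tcp_init / nvmfappstart steps traced above (binary path shortened, iptables comment trimmed): one port of the NIC pair moves into a namespace to play the target, addresses go on both ends, the NVMe/TCP port is opened, connectivity is verified in both directions, and nvmf_tgt starts inside the namespace.

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.2                                   # initiator side -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator side
modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!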
00:27:49.720 [2024-11-19 10:54:56.552166] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:49.720 [2024-11-19 10:54:56.633286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:49.720 [2024-11-19 10:54:56.675606] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:49.720 [2024-11-19 10:54:56.675645] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:49.720 [2024-11-19 10:54:56.675652] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:49.720 [2024-11-19 10:54:56.675658] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:49.720 [2024-11-19 10:54:56.675664] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:49.720 [2024-11-19 10:54:56.677001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:49.720 [2024-11-19 10:54:56.677114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:49.720 [2024-11-19 10:54:56.677115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:49.720 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:49.720 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:27:49.720 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:49.720 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:49.720 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:49.720 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:49.720 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:49.720 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.720 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:49.720 [2024-11-19 10:54:56.812394] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:49.720 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.720 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:49.720 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.720 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:49.720 Malloc0 00:27:49.720 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.720 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:49.720 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.720 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:49.720 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
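Between launching the target and the rpc_cmd calls below, the script blocks in waitforlisten. A minimal sketch, assuming that polling rpc_get_methods on the default /var/tmp/spdk.sock socket is an adequate readiness probe (the real autotest_common.sh helper does more):

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while kill -0 "$pid" 2>/dev/null; do
        # Any successful RPC proves the socket is up; rpc_get_methods is cheap.
        scripts/rpc.py -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1 && return 0
        sleep 0.1
    done
    return 1    # target exited before its RPC socket came up
}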
00:27:49.720 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:49.720 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.720 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:49.720 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.720 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:49.720 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.720 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:49.720 [2024-11-19 10:54:56.872017] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:49.720 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.720 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:27:49.720 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:27:49.720 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:27:49.720 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:27:49.720 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:49.720 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:49.720 { 00:27:49.720 "params": { 00:27:49.720 "name": "Nvme$subsystem", 00:27:49.720 "trtype": "$TEST_TRANSPORT", 00:27:49.720 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:49.720 "adrfam": "ipv4", 00:27:49.720 "trsvcid": "$NVMF_PORT", 00:27:49.720 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:49.720 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:49.720 "hdgst": ${hdgst:-false}, 00:27:49.720 "ddgst": ${ddgst:-false} 00:27:49.720 }, 00:27:49.720 "method": "bdev_nvme_attach_controller" 00:27:49.720 } 00:27:49.720 EOF 00:27:49.720 )") 00:27:49.720 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:27:49.720 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:27:49.720 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:27:49.720 10:54:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:49.720 "params": { 00:27:49.720 "name": "Nvme1", 00:27:49.720 "trtype": "tcp", 00:27:49.720 "traddr": "10.0.0.2", 00:27:49.720 "adrfam": "ipv4", 00:27:49.720 "trsvcid": "4420", 00:27:49.720 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:49.720 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:49.720 "hdgst": false, 00:27:49.720 "ddgst": false 00:27:49.720 }, 00:27:49.720 "method": "bdev_nvme_attach_controller" 00:27:49.720 }' 00:27:49.720 [2024-11-19 10:54:56.923777] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
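The five rpc_cmd invocations above provision the target end to end before the first bdevperf run. Written as explicit scripts/rpc.py calls (rpc_cmd wraps this; the socket path is assumed to be the default), with every argument value taken verbatim from the trace:

rpc="scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, flags exactly as traced
$rpc bdev_malloc_create 64 512 -b Malloc0                    # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512 from above
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # allow any host, set serial
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420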
00:27:49.720 [2024-11-19 10:54:56.923820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1844592 ] 00:27:49.720 [2024-11-19 10:54:56.998116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:49.720 [2024-11-19 10:54:57.039511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:49.979 Running I/O for 1 seconds... 00:27:51.358 10965.00 IOPS, 42.83 MiB/s 00:27:51.358 Latency(us) 00:27:51.358 [2024-11-19T09:54:58.807Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:51.358 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:51.358 Verification LBA range: start 0x0 length 0x4000 00:27:51.358 Nvme1n1 : 1.01 10973.65 42.87 0.00 0.00 11623.38 2336.50 11796.48 00:27:51.358 [2024-11-19T09:54:58.807Z] =================================================================================================================== 00:27:51.358 [2024-11-19T09:54:58.807Z] Total : 10973.65 42.87 0.00 0.00 11623.38 2336.50 11796.48 00:27:51.358 10:54:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1844826 00:27:51.358 10:54:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:27:51.358 10:54:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:27:51.358 10:54:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:27:51.358 10:54:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:27:51.358 10:54:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:27:51.358 10:54:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:51.358 10:54:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:51.358 { 00:27:51.358 "params": { 00:27:51.358 "name": "Nvme$subsystem", 00:27:51.358 "trtype": "$TEST_TRANSPORT", 00:27:51.358 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.358 "adrfam": "ipv4", 00:27:51.358 "trsvcid": "$NVMF_PORT", 00:27:51.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.358 "hdgst": ${hdgst:-false}, 00:27:51.358 "ddgst": ${ddgst:-false} 00:27:51.358 }, 00:27:51.358 "method": "bdev_nvme_attach_controller" 00:27:51.358 } 00:27:51.358 EOF 00:27:51.358 )") 00:27:51.358 10:54:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:27:51.358 10:54:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
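This second bdevperf run reads its bdev config from /dev/fd/63; gen_nvmf_target_json expands the params object in the trace that follows. A plausible reconstruction of the complete invocation, where the outer "subsystems"/"config" wrapper is an assumption about the helper's output shape and only the inner params object is verbatim from the trace:

build/examples/bdevperf -q 128 -o 4096 -w verify -t 15 -f --json <(cat <<'JSON'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      }
    }]
  }]
}
JSON
)

-q/-o/-w/-t give queue depth 128, 4096-byte I/O, a verify workload, and a 15-second run; -f appears to keep bdevperf alive across the kill -9 of the target below, which is what lets the host-side abort handling be observed.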
00:27:51.358 10:54:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:27:51.358 10:54:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:51.358 "params": { 00:27:51.358 "name": "Nvme1", 00:27:51.358 "trtype": "tcp", 00:27:51.358 "traddr": "10.0.0.2", 00:27:51.358 "adrfam": "ipv4", 00:27:51.358 "trsvcid": "4420", 00:27:51.358 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:51.358 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:51.358 "hdgst": false, 00:27:51.358 "ddgst": false 00:27:51.358 }, 00:27:51.358 "method": "bdev_nvme_attach_controller" 00:27:51.358 }' 00:27:51.358 [2024-11-19 10:54:58.579786] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:27:51.358 [2024-11-19 10:54:58.579836] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1844826 ] 00:27:51.358 [2024-11-19 10:54:58.656540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.358 [2024-11-19 10:54:58.694761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.617 Running I/O for 15 seconds... 00:27:53.938 10994.00 IOPS, 42.95 MiB/s [2024-11-19T09:55:01.650Z] 11028.00 IOPS, 43.08 MiB/s [2024-11-19T09:55:01.650Z] 10:55:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1844393 00:27:54.201 10:55:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:27:54.201 [2024-11-19 10:55:01.549736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:96264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.201 [2024-11-19 10:55:01.549775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.201 [2024-11-19 10:55:01.549792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:96272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.201 [2024-11-19 10:55:01.549800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.202 [2024-11-19 10:55:01.549811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:96280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.202 [2024-11-19 10:55:01.549819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.202 [2024-11-19 10:55:01.549828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:96288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.202 [2024-11-19 10:55:01.549835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.202 [2024-11-19 10:55:01.549844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.202 [2024-11-19 10:55:01.549852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.202 [2024-11-19 10:55:01.549862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:96304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.202 [2024-11-19 
10:55:01.549870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command WRITE / "ABORTED - SQ DELETION (00/08)" command-completion pair repeats for each remaining in-flight write, lba:96312 through lba:96944 (len:8 each), as the host tears down the qpair after the target was killed ...]
[2024-11-19 10:55:01.551313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:95992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-19 10:55:01.551320] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.204 [2024-11-19 10:55:01.551329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:96952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.204 [2024-11-19 10:55:01.551336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.204 [2024-11-19 10:55:01.551344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.204 [2024-11-19 10:55:01.551351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.204 [2024-11-19 10:55:01.551359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:96968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.204 [2024-11-19 10:55:01.551366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.204 [2024-11-19 10:55:01.551374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:96976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.204 [2024-11-19 10:55:01.551382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.204 [2024-11-19 10:55:01.551390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:96984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.204 [2024-11-19 10:55:01.551396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.204 [2024-11-19 10:55:01.551404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.204 [2024-11-19 10:55:01.551410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.204 [2024-11-19 10:55:01.551418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.204 [2024-11-19 10:55:01.551426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.204 [2024-11-19 10:55:01.551434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:96000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.204 [2024-11-19 10:55:01.551440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.204 [2024-11-19 10:55:01.551450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:96008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.204 [2024-11-19 10:55:01.551456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.204 [2024-11-19 10:55:01.551465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:96016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.204 [2024-11-19 10:55:01.551472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.204 [2024-11-19 10:55:01.551482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:96024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.204 [2024-11-19 10:55:01.551489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.204 [2024-11-19 10:55:01.551498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.204 [2024-11-19 10:55:01.551504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.204 [2024-11-19 10:55:01.551512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.204 [2024-11-19 10:55:01.551518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.204 [2024-11-19 10:55:01.551527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:96048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.204 [2024-11-19 10:55:01.551533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.204 [2024-11-19 10:55:01.551542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:96056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.204 [2024-11-19 10:55:01.551550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.204 [2024-11-19 10:55:01.551559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:96064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.204 [2024-11-19 10:55:01.551565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.204 [2024-11-19 10:55:01.551573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.204 [2024-11-19 10:55:01.551579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.204 [2024-11-19 10:55:01.551587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:96080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.204 [2024-11-19 10:55:01.551595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.204 [2024-11-19 10:55:01.551603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:96088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.204 [2024-11-19 10:55:01.551609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.204 [2024-11-19 10:55:01.551617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.204 [2024-11-19 10:55:01.551624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.204 [2024-11-19 10:55:01.551632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:96104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.204 [2024-11-19 10:55:01.551643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.204 [2024-11-19 10:55:01.551651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:96112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.204 [2024-11-19 10:55:01.551658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.204 [2024-11-19 10:55:01.551666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:97008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.204 [2024-11-19 10:55:01.551673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.204 [2024-11-19 10:55:01.551680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:96120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.204 [2024-11-19 10:55:01.551687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.204 [2024-11-19 10:55:01.551695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:96128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.204 [2024-11-19 10:55:01.551702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.204 [2024-11-19 10:55:01.551711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:96136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.204 [2024-11-19 10:55:01.551718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.204 [2024-11-19 10:55:01.551727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:96144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.204 [2024-11-19 10:55:01.551734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.204 [2024-11-19 10:55:01.551742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:96152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.204 [2024-11-19 10:55:01.551748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.204 [2024-11-19 10:55:01.551757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:96160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.204 [2024-11-19 10:55:01.551764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.204 [2024-11-19 10:55:01.551772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:96168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.204 [2024-11-19 10:55:01.551778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:54.204 [2024-11-19 10:55:01.551786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:96176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.204 [2024-11-19 10:55:01.551794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.204 [2024-11-19 10:55:01.551802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:96184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.204 [2024-11-19 10:55:01.551809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.204 [2024-11-19 10:55:01.551817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:96192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.205 [2024-11-19 10:55:01.551824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.205 [2024-11-19 10:55:01.551833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:96200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.205 [2024-11-19 10:55:01.551840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.205 [2024-11-19 10:55:01.551848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:96208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.205 [2024-11-19 10:55:01.551854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.205 [2024-11-19 10:55:01.551862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:96216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.205 [2024-11-19 10:55:01.551869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.205 [2024-11-19 10:55:01.551877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:96224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.205 [2024-11-19 10:55:01.551884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.205 [2024-11-19 10:55:01.551892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:96232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.205 [2024-11-19 10:55:01.551898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.205 [2024-11-19 10:55:01.551906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:96240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.205 [2024-11-19 10:55:01.551912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.205 [2024-11-19 10:55:01.551921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:96248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.205 [2024-11-19 10:55:01.551928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.205 [2024-11-19 10:55:01.551936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95fcf0 is same with the state(6) to be set
00:27:54.205 [2024-11-19 10:55:01.551945] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:54.205 [2024-11-19 10:55:01.551957] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:54.205 [2024-11-19 10:55:01.551963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96256 len:8 PRP1 0x0 PRP2 0x0
00:27:54.205 [2024-11-19 10:55:01.551972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.205 [2024-11-19 10:55:01.552053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:54.205 [2024-11-19 10:55:01.552063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.205 [2024-11-19 10:55:01.552071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:54.205 [2024-11-19 10:55:01.552077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.205 [2024-11-19 10:55:01.552084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:54.205 [2024-11-19 10:55:01.552090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.205 [2024-11-19 10:55:01.552098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:54.205 [2024-11-19 10:55:01.552107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.205 [2024-11-19 10:55:01.552115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:54.205 [2024-11-19 10:55:01.554943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.205 [2024-11-19 10:55:01.554975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:54.205 [2024-11-19 10:55:01.555568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.205 [2024-11-19 10:55:01.555586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:54.205 [2024-11-19 10:55:01.555595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:54.205 [2024-11-19 10:55:01.555774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:54.205 [2024-11-19 10:55:01.555959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.205 [2024-11-19 10:55:01.555968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.205 [2024-11-19 10:55:01.555976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.205 [2024-11-19 10:55:01.555985] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.205 [2024-11-19 10:55:01.568300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.205 [2024-11-19 10:55:01.568609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.205 [2024-11-19 10:55:01.568629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:54.205 [2024-11-19 10:55:01.568638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:54.205 [2024-11-19 10:55:01.568812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:54.205 [2024-11-19 10:55:01.568994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.205 [2024-11-19 10:55:01.569005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.205 [2024-11-19 10:55:01.569012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.205 [2024-11-19 10:55:01.569020] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.205 [2024-11-19 10:55:01.581197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.205 [2024-11-19 10:55:01.581600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.205 [2024-11-19 10:55:01.581618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:54.205 [2024-11-19 10:55:01.581626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:54.205 [2024-11-19 10:55:01.581790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:54.205 [2024-11-19 10:55:01.581963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.205 [2024-11-19 10:55:01.581973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.205 [2024-11-19 10:55:01.581979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.205 [2024-11-19 10:55:01.581986] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.205 [2024-11-19 10:55:01.594262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.205 [2024-11-19 10:55:01.594623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.205 [2024-11-19 10:55:01.594642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:54.205 [2024-11-19 10:55:01.594650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:54.205 [2024-11-19 10:55:01.594815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:54.205 [2024-11-19 10:55:01.594988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.205 [2024-11-19 10:55:01.594998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.205 [2024-11-19 10:55:01.595005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.205 [2024-11-19 10:55:01.595012] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.205 [2024-11-19 10:55:01.607082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.205 [2024-11-19 10:55:01.607495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.205 [2024-11-19 10:55:01.607541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:54.205 [2024-11-19 10:55:01.607565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:54.205 [2024-11-19 10:55:01.608162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:54.205 [2024-11-19 10:55:01.608328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.205 [2024-11-19 10:55:01.608337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.205 [2024-11-19 10:55:01.608344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.205 [2024-11-19 10:55:01.608351] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.205 [2024-11-19 10:55:01.620020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.205 [2024-11-19 10:55:01.620454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.205 [2024-11-19 10:55:01.620500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:54.205 [2024-11-19 10:55:01.620524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:54.205 [2024-11-19 10:55:01.621117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:54.205 [2024-11-19 10:55:01.621587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.205 [2024-11-19 10:55:01.621597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.205 [2024-11-19 10:55:01.621605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.205 [2024-11-19 10:55:01.621612] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.205 [2024-11-19 10:55:01.632825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.205 [2024-11-19 10:55:01.633247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.206 [2024-11-19 10:55:01.633268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:54.206 [2024-11-19 10:55:01.633276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:54.206 [2024-11-19 10:55:01.633439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:54.206 [2024-11-19 10:55:01.633603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.206 [2024-11-19 10:55:01.633612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.206 [2024-11-19 10:55:01.633619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.206 [2024-11-19 10:55:01.633625] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.206 [2024-11-19 10:55:01.645978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.206 [2024-11-19 10:55:01.646334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.206 [2024-11-19 10:55:01.646352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:54.206 [2024-11-19 10:55:01.646360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:54.206 [2024-11-19 10:55:01.646549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:54.206 [2024-11-19 10:55:01.646723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.206 [2024-11-19 10:55:01.646733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.206 [2024-11-19 10:55:01.646740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.206 [2024-11-19 10:55:01.646746] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.466 [2024-11-19 10:55:01.659063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.466 [2024-11-19 10:55:01.659441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.466 [2024-11-19 10:55:01.659459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:54.466 [2024-11-19 10:55:01.659467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:54.466 [2024-11-19 10:55:01.659640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:54.466 [2024-11-19 10:55:01.659812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.466 [2024-11-19 10:55:01.659822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.466 [2024-11-19 10:55:01.659828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.466 [2024-11-19 10:55:01.659835] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.466 [2024-11-19 10:55:01.671971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.466 [2024-11-19 10:55:01.672345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.466 [2024-11-19 10:55:01.672362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:54.466 [2024-11-19 10:55:01.672370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:54.466 [2024-11-19 10:55:01.672538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:54.466 [2024-11-19 10:55:01.672708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.466 [2024-11-19 10:55:01.672718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.466 [2024-11-19 10:55:01.672725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.466 [2024-11-19 10:55:01.672731] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.466 [2024-11-19 10:55:01.684842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.466 [2024-11-19 10:55:01.685267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.467 [2024-11-19 10:55:01.685323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:54.467 [2024-11-19 10:55:01.685348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:54.467 [2024-11-19 10:55:01.685929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:54.467 [2024-11-19 10:55:01.686526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.467 [2024-11-19 10:55:01.686554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.467 [2024-11-19 10:55:01.686574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.467 [2024-11-19 10:55:01.686594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.467 [2024-11-19 10:55:01.697768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.467 [2024-11-19 10:55:01.698113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.467 [2024-11-19 10:55:01.698131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:54.467 [2024-11-19 10:55:01.698139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:54.467 [2024-11-19 10:55:01.698311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:54.467 [2024-11-19 10:55:01.698483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.467 [2024-11-19 10:55:01.698493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.467 [2024-11-19 10:55:01.698499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.467 [2024-11-19 10:55:01.698506] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.467 [2024-11-19 10:55:01.710618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.467 [2024-11-19 10:55:01.711043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.467 [2024-11-19 10:55:01.711062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:54.467 [2024-11-19 10:55:01.711070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:54.467 [2024-11-19 10:55:01.711232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:54.467 [2024-11-19 10:55:01.711396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.467 [2024-11-19 10:55:01.711405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.467 [2024-11-19 10:55:01.711416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.467 [2024-11-19 10:55:01.711423] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.467 [2024-11-19 10:55:01.723581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.467 [2024-11-19 10:55:01.723928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.467 [2024-11-19 10:55:01.723946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:54.467 [2024-11-19 10:55:01.723960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:54.467 [2024-11-19 10:55:01.724124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:54.467 [2024-11-19 10:55:01.724287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.467 [2024-11-19 10:55:01.724297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.467 [2024-11-19 10:55:01.724303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.467 [2024-11-19 10:55:01.724310] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.467 [2024-11-19 10:55:01.736531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.467 [2024-11-19 10:55:01.736955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.467 [2024-11-19 10:55:01.736972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:54.467 [2024-11-19 10:55:01.736980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:54.467 [2024-11-19 10:55:01.737144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:54.467 [2024-11-19 10:55:01.737308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.467 [2024-11-19 10:55:01.737317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.467 [2024-11-19 10:55:01.737324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.467 [2024-11-19 10:55:01.737330] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.467 [2024-11-19 10:55:01.749441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.467 [2024-11-19 10:55:01.749795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.467 [2024-11-19 10:55:01.749813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:54.467 [2024-11-19 10:55:01.749821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:54.467 [2024-11-19 10:55:01.749992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:54.467 [2024-11-19 10:55:01.750157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.467 [2024-11-19 10:55:01.750167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.467 [2024-11-19 10:55:01.750173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.467 [2024-11-19 10:55:01.750180] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.467 [2024-11-19 10:55:01.762429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.467 [2024-11-19 10:55:01.762866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.467 [2024-11-19 10:55:01.762883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:54.467 [2024-11-19 10:55:01.762891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:54.467 [2024-11-19 10:55:01.763071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:54.467 [2024-11-19 10:55:01.763244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.467 [2024-11-19 10:55:01.763254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.467 [2024-11-19 10:55:01.763261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.467 [2024-11-19 10:55:01.763267] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.467 [2024-11-19 10:55:01.775304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.467 [2024-11-19 10:55:01.775721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.467 [2024-11-19 10:55:01.775763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:54.467 [2024-11-19 10:55:01.775789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:54.467 [2024-11-19 10:55:01.776382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:54.467 [2024-11-19 10:55:01.776896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.467 [2024-11-19 10:55:01.776906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.467 [2024-11-19 10:55:01.776912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.467 [2024-11-19 10:55:01.776919] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.467 [2024-11-19 10:55:01.788217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.467 [2024-11-19 10:55:01.788627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.467 [2024-11-19 10:55:01.788644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:54.467 [2024-11-19 10:55:01.788652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:54.467 [2024-11-19 10:55:01.788815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:54.467 [2024-11-19 10:55:01.788986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.467 [2024-11-19 10:55:01.788997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.467 [2024-11-19 10:55:01.789003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.467 [2024-11-19 10:55:01.789010] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.467 [2024-11-19 10:55:01.801062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.467 [2024-11-19 10:55:01.801479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.467 [2024-11-19 10:55:01.801501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:54.467 [2024-11-19 10:55:01.801510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:54.467 [2024-11-19 10:55:01.801682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:54.467 [2024-11-19 10:55:01.801855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.467 [2024-11-19 10:55:01.801865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.467 [2024-11-19 10:55:01.801872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.467 [2024-11-19 10:55:01.801879] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.468 [2024-11-19 10:55:01.814211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.468 [2024-11-19 10:55:01.814639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.468 [2024-11-19 10:55:01.814658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:54.468 [2024-11-19 10:55:01.814666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:54.468 [2024-11-19 10:55:01.814844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:54.468 [2024-11-19 10:55:01.815028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.468 [2024-11-19 10:55:01.815039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.468 [2024-11-19 10:55:01.815046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.468 [2024-11-19 10:55:01.815053] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.468 [2024-11-19 10:55:01.827365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.468 [2024-11-19 10:55:01.827773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.468 [2024-11-19 10:55:01.827792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:54.468 [2024-11-19 10:55:01.827800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:54.468 [2024-11-19 10:55:01.827983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:54.468 [2024-11-19 10:55:01.828163] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.468 [2024-11-19 10:55:01.828173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.468 [2024-11-19 10:55:01.828180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.468 [2024-11-19 10:55:01.828187] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.468 [2024-11-19 10:55:01.840518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.468 [2024-11-19 10:55:01.840942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.468 [2024-11-19 10:55:01.840966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:54.468 [2024-11-19 10:55:01.840974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:54.468 [2024-11-19 10:55:01.841158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:54.468 [2024-11-19 10:55:01.841326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.468 [2024-11-19 10:55:01.841336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.468 [2024-11-19 10:55:01.841342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.468 [2024-11-19 10:55:01.841349] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.468 [2024-11-19 10:55:01.853359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.468 [2024-11-19 10:55:01.853772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.468 [2024-11-19 10:55:01.853790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:54.468 [2024-11-19 10:55:01.853798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:54.468 [2024-11-19 10:55:01.853969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:54.468 [2024-11-19 10:55:01.854135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.468 [2024-11-19 10:55:01.854145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.468 [2024-11-19 10:55:01.854151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.468 [2024-11-19 10:55:01.854158] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.468 [2024-11-19 10:55:01.866155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.468 [2024-11-19 10:55:01.866497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.468 [2024-11-19 10:55:01.866515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:54.468 [2024-11-19 10:55:01.866523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:54.468 [2024-11-19 10:55:01.866686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:54.468 [2024-11-19 10:55:01.866850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.468 [2024-11-19 10:55:01.866860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.468 [2024-11-19 10:55:01.866866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.468 [2024-11-19 10:55:01.866872] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.468 [2024-11-19 10:55:01.879000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.468 [2024-11-19 10:55:01.879435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.468 [2024-11-19 10:55:01.879453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:54.468 [2024-11-19 10:55:01.879461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:54.468 [2024-11-19 10:55:01.879624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:54.468 [2024-11-19 10:55:01.879787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.468 [2024-11-19 10:55:01.879797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.468 [2024-11-19 10:55:01.879807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.468 [2024-11-19 10:55:01.879814] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.468 [2024-11-19 10:55:01.891880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.468 [2024-11-19 10:55:01.892284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.468 [2024-11-19 10:55:01.892330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:54.468 [2024-11-19 10:55:01.892353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:54.468 [2024-11-19 10:55:01.892933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:54.468 [2024-11-19 10:55:01.893336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.468 [2024-11-19 10:55:01.893345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.468 [2024-11-19 10:55:01.893352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.468 [2024-11-19 10:55:01.893359] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.468 [2024-11-19 10:55:01.904805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.468 [2024-11-19 10:55:01.905232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.468 [2024-11-19 10:55:01.905249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:54.468 [2024-11-19 10:55:01.905256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:54.468 [2024-11-19 10:55:01.905419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:54.468 [2024-11-19 10:55:01.905583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.468 [2024-11-19 10:55:01.905592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.468 [2024-11-19 10:55:01.905599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.468 [2024-11-19 10:55:01.905606] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.728 [2024-11-19 10:55:01.917901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.728 [2024-11-19 10:55:01.918297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.729 [2024-11-19 10:55:01.918315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:54.729 [2024-11-19 10:55:01.918322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:54.729 [2024-11-19 10:55:01.918485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:54.729 [2024-11-19 10:55:01.918649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.729 [2024-11-19 10:55:01.918658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.729 [2024-11-19 10:55:01.918665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.729 [2024-11-19 10:55:01.918671] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.729 [2024-11-19 10:55:01.930733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.729 [2024-11-19 10:55:01.931068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.729 [2024-11-19 10:55:01.931086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:54.729 [2024-11-19 10:55:01.931094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:54.729 [2024-11-19 10:55:01.931257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:54.729 [2024-11-19 10:55:01.931421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.729 [2024-11-19 10:55:01.931430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.729 [2024-11-19 10:55:01.931436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.729 [2024-11-19 10:55:01.931443] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:54.729 [2024-11-19 10:55:01.943663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.729 [2024-11-19 10:55:01.944066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.729 [2024-11-19 10:55:01.944085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:54.729 [2024-11-19 10:55:01.944093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:54.729 [2024-11-19 10:55:01.944269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:54.729 [2024-11-19 10:55:01.944433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.729 [2024-11-19 10:55:01.944443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.729 [2024-11-19 10:55:01.944449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.729 [2024-11-19 10:55:01.944456] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.729 [2024-11-19 10:55:01.956525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.729 [2024-11-19 10:55:01.956942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.729 [2024-11-19 10:55:01.956965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:54.729 [2024-11-19 10:55:01.956973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:54.729 [2024-11-19 10:55:01.957136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:54.729 [2024-11-19 10:55:01.957300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.729 [2024-11-19 10:55:01.957310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.729 [2024-11-19 10:55:01.957317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.729 [2024-11-19 10:55:01.957323] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:54.729 [2024-11-19 10:55:01.969371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.729 [2024-11-19 10:55:01.969803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.729 [2024-11-19 10:55:01.969849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:54.729 [2024-11-19 10:55:01.969880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:54.729 [2024-11-19 10:55:01.970408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:54.729 [2024-11-19 10:55:01.970572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.729 [2024-11-19 10:55:01.970582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.729 [2024-11-19 10:55:01.970589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.729 [2024-11-19 10:55:01.970595] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.729 9461.00 IOPS, 36.96 MiB/s [2024-11-19T09:55:02.178Z] [2024-11-19 10:55:01.983342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.729 [2024-11-19 10:55:01.983768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.729 [2024-11-19 10:55:01.983814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:54.729 [2024-11-19 10:55:01.983838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:54.729 [2024-11-19 10:55:01.984327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:54.729 [2024-11-19 10:55:01.984492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.729 [2024-11-19 10:55:01.984500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.729 [2024-11-19 10:55:01.984507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.729 [2024-11-19 10:55:01.984513] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:54.729 [2024-11-19 10:55:01.996209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.729 [2024-11-19 10:55:01.996628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.729 [2024-11-19 10:55:01.996646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:54.729 [2024-11-19 10:55:01.996654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:54.729 [2024-11-19 10:55:01.996816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:54.729 [2024-11-19 10:55:01.996987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.729 [2024-11-19 10:55:01.996997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.729 [2024-11-19 10:55:01.997004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.729 [2024-11-19 10:55:01.997011] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
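(Aside: the "9461.00 IOPS, 36.96 MiB/s" marker interleaved above is a periodic throughput sample printed by the I/O workload while these reconnect attempts fail. The two figures are mutually consistent assuming a 4 KiB I/O size: 9461 IOPS x 4096 B = 38,752,256 B/s / 1024 / 1024 ≈ 36.96 MiB/s. The 4 KiB block size is inferred from this arithmetic; it is not stated in the log itself.)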
00:27:54.729 [2024-11-19 10:55:02.009003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.729 [2024-11-19 10:55:02.009429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.729 [2024-11-19 10:55:02.009485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:54.729 [2024-11-19 10:55:02.009509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:54.729 [2024-11-19 10:55:02.010104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:54.729 [2024-11-19 10:55:02.010324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.729 [2024-11-19 10:55:02.010334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.729 [2024-11-19 10:55:02.010340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.729 [2024-11-19 10:55:02.010347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:54.729 [2024-11-19 10:55:02.021907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.729 [2024-11-19 10:55:02.022351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.729 [2024-11-19 10:55:02.022397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:54.729 [2024-11-19 10:55:02.022422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:54.729 [2024-11-19 10:55:02.023016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:54.729 [2024-11-19 10:55:02.023517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.729 [2024-11-19 10:55:02.023527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.729 [2024-11-19 10:55:02.023533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.729 [2024-11-19 10:55:02.023540] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.729 [2024-11-19 10:55:02.034826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.729 [2024-11-19 10:55:02.035250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.729 [2024-11-19 10:55:02.035295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:54.729 [2024-11-19 10:55:02.035320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:54.729 [2024-11-19 10:55:02.035899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:54.729 [2024-11-19 10:55:02.036440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.729 [2024-11-19 10:55:02.036450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.729 [2024-11-19 10:55:02.036457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.730 [2024-11-19 10:55:02.036463] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:54.730 [2024-11-19 10:55:02.047621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.730 [2024-11-19 10:55:02.048018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-11-19 10:55:02.048065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:54.730 [2024-11-19 10:55:02.048089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:54.730 [2024-11-19 10:55:02.048535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:54.730 [2024-11-19 10:55:02.048699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.730 [2024-11-19 10:55:02.048708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.730 [2024-11-19 10:55:02.048720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.730 [2024-11-19 10:55:02.048726] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.730 [2024-11-19 10:55:02.060480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.730 [2024-11-19 10:55:02.060823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-11-19 10:55:02.060841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:54.730 [2024-11-19 10:55:02.060849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:54.730 [2024-11-19 10:55:02.061030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:54.730 [2024-11-19 10:55:02.061204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.730 [2024-11-19 10:55:02.061214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.730 [2024-11-19 10:55:02.061222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.730 [2024-11-19 10:55:02.061228] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:54.730 [2024-11-19 10:55:02.073541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.730 [2024-11-19 10:55:02.073884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-11-19 10:55:02.073902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:54.730 [2024-11-19 10:55:02.073912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:54.730 [2024-11-19 10:55:02.074092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:54.730 [2024-11-19 10:55:02.074272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.730 [2024-11-19 10:55:02.074282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.730 [2024-11-19 10:55:02.074289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.730 [2024-11-19 10:55:02.074296] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.730 [2024-11-19 10:55:02.086494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.730 [2024-11-19 10:55:02.086944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-11-19 10:55:02.087004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:54.730 [2024-11-19 10:55:02.087029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:54.730 [2024-11-19 10:55:02.087499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:54.730 [2024-11-19 10:55:02.087673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.730 [2024-11-19 10:55:02.087683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.730 [2024-11-19 10:55:02.087692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.730 [2024-11-19 10:55:02.087700] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:54.730 [2024-11-19 10:55:02.101460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.730 [2024-11-19 10:55:02.101912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-11-19 10:55:02.101970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:54.730 [2024-11-19 10:55:02.101994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:54.730 [2024-11-19 10:55:02.102540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:54.730 [2024-11-19 10:55:02.102795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.730 [2024-11-19 10:55:02.102809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.730 [2024-11-19 10:55:02.102819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.730 [2024-11-19 10:55:02.102829] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.730 [2024-11-19 10:55:02.114424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.730 [2024-11-19 10:55:02.114824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-11-19 10:55:02.114841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:54.730 [2024-11-19 10:55:02.114849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:54.730 [2024-11-19 10:55:02.115023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:54.730 [2024-11-19 10:55:02.115193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.730 [2024-11-19 10:55:02.115203] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.730 [2024-11-19 10:55:02.115209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.730 [2024-11-19 10:55:02.115216] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:54.730 [2024-11-19 10:55:02.127275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.730 [2024-11-19 10:55:02.127707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-11-19 10:55:02.127753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:54.730 [2024-11-19 10:55:02.127776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:54.730 [2024-11-19 10:55:02.128373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:54.730 [2024-11-19 10:55:02.128794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.730 [2024-11-19 10:55:02.128804] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.730 [2024-11-19 10:55:02.128810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.730 [2024-11-19 10:55:02.128816] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.730 [2024-11-19 10:55:02.140109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.730 [2024-11-19 10:55:02.140529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-11-19 10:55:02.140547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:54.730 [2024-11-19 10:55:02.140558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:54.730 [2024-11-19 10:55:02.140722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:54.730 [2024-11-19 10:55:02.140885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.730 [2024-11-19 10:55:02.140895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.730 [2024-11-19 10:55:02.140901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.730 [2024-11-19 10:55:02.140908] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:54.730 [2024-11-19 10:55:02.152926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.730 [2024-11-19 10:55:02.153291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-11-19 10:55:02.153337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:54.730 [2024-11-19 10:55:02.153361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:54.730 [2024-11-19 10:55:02.153858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:54.730 [2024-11-19 10:55:02.154028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.730 [2024-11-19 10:55:02.154038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.730 [2024-11-19 10:55:02.154045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.730 [2024-11-19 10:55:02.154052] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.730 [2024-11-19 10:55:02.165832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.730 [2024-11-19 10:55:02.166234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-11-19 10:55:02.166251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:54.730 [2024-11-19 10:55:02.166259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:54.730 [2024-11-19 10:55:02.166423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:54.730 [2024-11-19 10:55:02.166587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.730 [2024-11-19 10:55:02.166596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.731 [2024-11-19 10:55:02.166604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.731 [2024-11-19 10:55:02.166610] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:54.991 [2024-11-19 10:55:02.178901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.991 [2024-11-19 10:55:02.179301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.991 [2024-11-19 10:55:02.179319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:54.991 [2024-11-19 10:55:02.179326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:54.991 [2024-11-19 10:55:02.179490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:54.991 [2024-11-19 10:55:02.179657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.991 [2024-11-19 10:55:02.179667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.991 [2024-11-19 10:55:02.179674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.991 [2024-11-19 10:55:02.179680] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.991 [2024-11-19 10:55:02.191700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.991 [2024-11-19 10:55:02.192105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.991 [2024-11-19 10:55:02.192123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:54.991 [2024-11-19 10:55:02.192131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:54.991 [2024-11-19 10:55:02.192295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:54.991 [2024-11-19 10:55:02.192458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.991 [2024-11-19 10:55:02.192468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.991 [2024-11-19 10:55:02.192474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.991 [2024-11-19 10:55:02.192481] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:54.991 [2024-11-19 10:55:02.204558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.991 [2024-11-19 10:55:02.204975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.991 [2024-11-19 10:55:02.205025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:54.991 [2024-11-19 10:55:02.205049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:54.991 [2024-11-19 10:55:02.205609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:54.991 [2024-11-19 10:55:02.205774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.991 [2024-11-19 10:55:02.205783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.991 [2024-11-19 10:55:02.205805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.991 [2024-11-19 10:55:02.205813] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.991 [2024-11-19 10:55:02.217657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.991 [2024-11-19 10:55:02.218110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.991 [2024-11-19 10:55:02.218130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:54.991 [2024-11-19 10:55:02.218138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:54.991 [2024-11-19 10:55:02.218316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:54.991 [2024-11-19 10:55:02.218494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.991 [2024-11-19 10:55:02.218504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.991 [2024-11-19 10:55:02.218515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.991 [2024-11-19 10:55:02.218523] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:54.991 [2024-11-19 10:55:02.230528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.991 [2024-11-19 10:55:02.230965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.991 [2024-11-19 10:55:02.231012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:54.991 [2024-11-19 10:55:02.231035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:54.991 [2024-11-19 10:55:02.231613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:54.991 [2024-11-19 10:55:02.232024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.991 [2024-11-19 10:55:02.232034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.991 [2024-11-19 10:55:02.232040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.991 [2024-11-19 10:55:02.232047] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.991 [2024-11-19 10:55:02.243457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.991 [2024-11-19 10:55:02.243803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.991 [2024-11-19 10:55:02.243819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:54.991 [2024-11-19 10:55:02.243827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:54.991 [2024-11-19 10:55:02.244015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:54.991 [2024-11-19 10:55:02.244188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.991 [2024-11-19 10:55:02.244198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.991 [2024-11-19 10:55:02.244205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.991 [2024-11-19 10:55:02.244212] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:54.991 [2024-11-19 10:55:02.256387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.991 [2024-11-19 10:55:02.256816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.991 [2024-11-19 10:55:02.256833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:54.991 [2024-11-19 10:55:02.256841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:54.991 [2024-11-19 10:55:02.257010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:54.991 [2024-11-19 10:55:02.257175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.991 [2024-11-19 10:55:02.257184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.991 [2024-11-19 10:55:02.257191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.991 [2024-11-19 10:55:02.257198] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.991 [2024-11-19 10:55:02.269480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.991 [2024-11-19 10:55:02.269907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.991 [2024-11-19 10:55:02.269925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:54.991 [2024-11-19 10:55:02.269933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:54.991 [2024-11-19 10:55:02.270110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:54.991 [2024-11-19 10:55:02.270284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.991 [2024-11-19 10:55:02.270294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.991 [2024-11-19 10:55:02.270303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.991 [2024-11-19 10:55:02.270310] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:54.991 [2024-11-19 10:55:02.282446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.991 [2024-11-19 10:55:02.282867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.991 [2024-11-19 10:55:02.282914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:54.991 [2024-11-19 10:55:02.282937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:54.991 [2024-11-19 10:55:02.283526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:54.991 [2024-11-19 10:55:02.284105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.991 [2024-11-19 10:55:02.284115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.992 [2024-11-19 10:55:02.284122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.992 [2024-11-19 10:55:02.284129] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.992 [2024-11-19 10:55:02.295372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.992 [2024-11-19 10:55:02.295782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.992 [2024-11-19 10:55:02.295800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:54.992 [2024-11-19 10:55:02.295807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:54.992 [2024-11-19 10:55:02.295977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:54.992 [2024-11-19 10:55:02.296142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.992 [2024-11-19 10:55:02.296151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.992 [2024-11-19 10:55:02.296158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.992 [2024-11-19 10:55:02.296164] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:54.992 [2024-11-19 10:55:02.308241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.992 [2024-11-19 10:55:02.308668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.992 [2024-11-19 10:55:02.308714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:54.992 [2024-11-19 10:55:02.308746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:54.992 [2024-11-19 10:55:02.309262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:54.992 [2024-11-19 10:55:02.309427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.992 [2024-11-19 10:55:02.309437] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.992 [2024-11-19 10:55:02.309443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.992 [2024-11-19 10:55:02.309449] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.992 [2024-11-19 10:55:02.321168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.992 [2024-11-19 10:55:02.321533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.992 [2024-11-19 10:55:02.321551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:54.992 [2024-11-19 10:55:02.321559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:54.992 [2024-11-19 10:55:02.321732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:54.992 [2024-11-19 10:55:02.321905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.992 [2024-11-19 10:55:02.321915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.992 [2024-11-19 10:55:02.321922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.992 [2024-11-19 10:55:02.321929] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:54.992 [2024-11-19 10:55:02.334292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.992 [2024-11-19 10:55:02.334692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.992 [2024-11-19 10:55:02.334710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:54.992 [2024-11-19 10:55:02.334718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:54.992 [2024-11-19 10:55:02.334895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:54.992 [2024-11-19 10:55:02.335081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.992 [2024-11-19 10:55:02.335092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.992 [2024-11-19 10:55:02.335099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.992 [2024-11-19 10:55:02.335106] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.992 [2024-11-19 10:55:02.347459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.992 [2024-11-19 10:55:02.347742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.992 [2024-11-19 10:55:02.347761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:54.992 [2024-11-19 10:55:02.347769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:54.992 [2024-11-19 10:55:02.347954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:54.992 [2024-11-19 10:55:02.348136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.992 [2024-11-19 10:55:02.348146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.992 [2024-11-19 10:55:02.348154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.992 [2024-11-19 10:55:02.348161] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:54.992 [2024-11-19 10:55:02.360432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.992 [2024-11-19 10:55:02.360793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.992 [2024-11-19 10:55:02.360811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:54.992 [2024-11-19 10:55:02.360819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:54.992 [2024-11-19 10:55:02.360999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:54.992 [2024-11-19 10:55:02.361173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.992 [2024-11-19 10:55:02.361183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.992 [2024-11-19 10:55:02.361190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.992 [2024-11-19 10:55:02.361197] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.992 [2024-11-19 10:55:02.373370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.992 [2024-11-19 10:55:02.373824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.992 [2024-11-19 10:55:02.373870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:54.992 [2024-11-19 10:55:02.373894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:54.992 [2024-11-19 10:55:02.374413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:54.992 [2024-11-19 10:55:02.374588] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.992 [2024-11-19 10:55:02.374598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.992 [2024-11-19 10:55:02.374605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.992 [2024-11-19 10:55:02.374611] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:54.992 [2024-11-19 10:55:02.386252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.992 [2024-11-19 10:55:02.386529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.992 [2024-11-19 10:55:02.386547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:54.992 [2024-11-19 10:55:02.386554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:54.992 [2024-11-19 10:55:02.386717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:54.992 [2024-11-19 10:55:02.386880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.992 [2024-11-19 10:55:02.386889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.992 [2024-11-19 10:55:02.386901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.992 [2024-11-19 10:55:02.386908] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.992 [2024-11-19 10:55:02.399064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.992 [2024-11-19 10:55:02.399419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.992 [2024-11-19 10:55:02.399437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:54.992 [2024-11-19 10:55:02.399445] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:54.992 [2024-11-19 10:55:02.399607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:54.992 [2024-11-19 10:55:02.399771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.992 [2024-11-19 10:55:02.399780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.992 [2024-11-19 10:55:02.399787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.992 [2024-11-19 10:55:02.399793] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:54.992 [2024-11-19 10:55:02.411873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.992 [2024-11-19 10:55:02.412265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.992 [2024-11-19 10:55:02.412282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:54.992 [2024-11-19 10:55:02.412290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:54.992 [2024-11-19 10:55:02.412452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:54.993 [2024-11-19 10:55:02.412617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.993 [2024-11-19 10:55:02.412626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.993 [2024-11-19 10:55:02.412633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.993 [2024-11-19 10:55:02.412639] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.993 [2024-11-19 10:55:02.424797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.993 [2024-11-19 10:55:02.425200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.993 [2024-11-19 10:55:02.425217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:54.993 [2024-11-19 10:55:02.425225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:54.993 [2024-11-19 10:55:02.425388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:54.993 [2024-11-19 10:55:02.425552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.993 [2024-11-19 10:55:02.425561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.993 [2024-11-19 10:55:02.425567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.993 [2024-11-19 10:55:02.425575] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.993 [2024-11-19 10:55:02.437858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.993 [2024-11-19 10:55:02.438227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.993 [2024-11-19 10:55:02.438245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:54.993 [2024-11-19 10:55:02.438253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:54.993 [2024-11-19 10:55:02.438431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:54.993 [2024-11-19 10:55:02.438611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.993 [2024-11-19 10:55:02.438622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.993 [2024-11-19 10:55:02.438629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.993 [2024-11-19 10:55:02.438636] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.253 [2024-11-19 10:55:02.450828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.253 [2024-11-19 10:55:02.451113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.253 [2024-11-19 10:55:02.451131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.253 [2024-11-19 10:55:02.451139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.253 [2024-11-19 10:55:02.451302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.253 [2024-11-19 10:55:02.451466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.253 [2024-11-19 10:55:02.451475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.253 [2024-11-19 10:55:02.451482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.253 [2024-11-19 10:55:02.451488] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.253 [2024-11-19 10:55:02.463673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.253 [2024-11-19 10:55:02.464097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.253 [2024-11-19 10:55:02.464115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.253 [2024-11-19 10:55:02.464122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.253 [2024-11-19 10:55:02.464285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.253 [2024-11-19 10:55:02.464449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.253 [2024-11-19 10:55:02.464459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.253 [2024-11-19 10:55:02.464466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.253 [2024-11-19 10:55:02.464472] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.253 [2024-11-19 10:55:02.476638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.253 [2024-11-19 10:55:02.477054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.253 [2024-11-19 10:55:02.477072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.253 [2024-11-19 10:55:02.477083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.253 [2024-11-19 10:55:02.477257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.253 [2024-11-19 10:55:02.477432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.253 [2024-11-19 10:55:02.477441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.253 [2024-11-19 10:55:02.477448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.253 [2024-11-19 10:55:02.477455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.253 [2024-11-19 10:55:02.489547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.253 [2024-11-19 10:55:02.489909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.253 [2024-11-19 10:55:02.489927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.253 [2024-11-19 10:55:02.489935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.253 [2024-11-19 10:55:02.490103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.253 [2024-11-19 10:55:02.490268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.253 [2024-11-19 10:55:02.490277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.253 [2024-11-19 10:55:02.490284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.253 [2024-11-19 10:55:02.490290] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.253 [2024-11-19 10:55:02.502464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.253 [2024-11-19 10:55:02.502842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-19 10:55:02.502860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.254 [2024-11-19 10:55:02.502867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.254 [2024-11-19 10:55:02.503035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.254 [2024-11-19 10:55:02.503200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.254 [2024-11-19 10:55:02.503210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.254 [2024-11-19 10:55:02.503216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.254 [2024-11-19 10:55:02.503222] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.254 [2024-11-19 10:55:02.515390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.254 [2024-11-19 10:55:02.515788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-19 10:55:02.515834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.254 [2024-11-19 10:55:02.515858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.254 [2024-11-19 10:55:02.516447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.254 [2024-11-19 10:55:02.516621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.254 [2024-11-19 10:55:02.516634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.254 [2024-11-19 10:55:02.516642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.254 [2024-11-19 10:55:02.516649] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.254 [2024-11-19 10:55:02.528383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.254 [2024-11-19 10:55:02.528720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-19 10:55:02.528738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.254 [2024-11-19 10:55:02.528745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.254 [2024-11-19 10:55:02.528909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.254 [2024-11-19 10:55:02.529078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.254 [2024-11-19 10:55:02.529088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.254 [2024-11-19 10:55:02.529095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.254 [2024-11-19 10:55:02.529102] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.254 [2024-11-19 10:55:02.541298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.254 [2024-11-19 10:55:02.541777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-19 10:55:02.541819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.254 [2024-11-19 10:55:02.541843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.254 [2024-11-19 10:55:02.542420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.254 [2024-11-19 10:55:02.542594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.254 [2024-11-19 10:55:02.542604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.254 [2024-11-19 10:55:02.542613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.254 [2024-11-19 10:55:02.542621] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.254 [2024-11-19 10:55:02.556190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.254 [2024-11-19 10:55:02.556651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-19 10:55:02.556672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.254 [2024-11-19 10:55:02.556683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.254 [2024-11-19 10:55:02.556936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.254 [2024-11-19 10:55:02.557200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.254 [2024-11-19 10:55:02.557213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.254 [2024-11-19 10:55:02.557224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.254 [2024-11-19 10:55:02.557239] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.254 [2024-11-19 10:55:02.569241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.254 [2024-11-19 10:55:02.569670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-19 10:55:02.569715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.254 [2024-11-19 10:55:02.569739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.254 [2024-11-19 10:55:02.570331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.254 [2024-11-19 10:55:02.570796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.254 [2024-11-19 10:55:02.570806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.254 [2024-11-19 10:55:02.570813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.254 [2024-11-19 10:55:02.570819] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.254 [2024-11-19 10:55:02.582057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.254 [2024-11-19 10:55:02.582345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-19 10:55:02.582364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.254 [2024-11-19 10:55:02.582372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.254 [2024-11-19 10:55:02.582546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.254 [2024-11-19 10:55:02.582719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.254 [2024-11-19 10:55:02.582729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.254 [2024-11-19 10:55:02.582737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.254 [2024-11-19 10:55:02.582744] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.254 [2024-11-19 10:55:02.595413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.254 [2024-11-19 10:55:02.595775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-19 10:55:02.595795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.254 [2024-11-19 10:55:02.595804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.254 [2024-11-19 10:55:02.595994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.254 [2024-11-19 10:55:02.596169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.254 [2024-11-19 10:55:02.596179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.254 [2024-11-19 10:55:02.596186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.254 [2024-11-19 10:55:02.596193] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.254 [2024-11-19 10:55:02.608393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.254 [2024-11-19 10:55:02.608809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-19 10:55:02.608827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.254 [2024-11-19 10:55:02.608836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.254 [2024-11-19 10:55:02.609013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.254 [2024-11-19 10:55:02.609188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.254 [2024-11-19 10:55:02.609198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.254 [2024-11-19 10:55:02.609205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.254 [2024-11-19 10:55:02.609211] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.254 [2024-11-19 10:55:02.621308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.254 [2024-11-19 10:55:02.621664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-19 10:55:02.621712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.254 [2024-11-19 10:55:02.621737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.254 [2024-11-19 10:55:02.622340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.254 [2024-11-19 10:55:02.622919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.254 [2024-11-19 10:55:02.622929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.254 [2024-11-19 10:55:02.622936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.254 [2024-11-19 10:55:02.622943] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.255 [2024-11-19 10:55:02.634150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.255 [2024-11-19 10:55:02.634500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.255 [2024-11-19 10:55:02.634517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.255 [2024-11-19 10:55:02.634525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.255 [2024-11-19 10:55:02.634690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.255 [2024-11-19 10:55:02.634853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.255 [2024-11-19 10:55:02.634863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.255 [2024-11-19 10:55:02.634869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.255 [2024-11-19 10:55:02.634876] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.255 [2024-11-19 10:55:02.647111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.255 [2024-11-19 10:55:02.647455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.255 [2024-11-19 10:55:02.647473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.255 [2024-11-19 10:55:02.647481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.255 [2024-11-19 10:55:02.647647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.255 [2024-11-19 10:55:02.647811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.255 [2024-11-19 10:55:02.647822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.255 [2024-11-19 10:55:02.647828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.255 [2024-11-19 10:55:02.647836] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.255 [2024-11-19 10:55:02.660022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.255 [2024-11-19 10:55:02.660377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.255 [2024-11-19 10:55:02.660424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.255 [2024-11-19 10:55:02.660448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.255 [2024-11-19 10:55:02.660989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.255 [2024-11-19 10:55:02.661156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.255 [2024-11-19 10:55:02.661166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.255 [2024-11-19 10:55:02.661173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.255 [2024-11-19 10:55:02.661179] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.255 [2024-11-19 10:55:02.672873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.255 [2024-11-19 10:55:02.673272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.255 [2024-11-19 10:55:02.673291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.255 [2024-11-19 10:55:02.673299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.255 [2024-11-19 10:55:02.673463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.255 [2024-11-19 10:55:02.673626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.255 [2024-11-19 10:55:02.673636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.255 [2024-11-19 10:55:02.673642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.255 [2024-11-19 10:55:02.673648] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.255 [2024-11-19 10:55:02.685725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.255 [2024-11-19 10:55:02.686153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.255 [2024-11-19 10:55:02.686199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.255 [2024-11-19 10:55:02.686223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.255 [2024-11-19 10:55:02.686812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.255 [2024-11-19 10:55:02.686984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.255 [2024-11-19 10:55:02.687000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.255 [2024-11-19 10:55:02.687006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.255 [2024-11-19 10:55:02.687013] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.255 [2024-11-19 10:55:02.698874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.255 [2024-11-19 10:55:02.699238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.255 [2024-11-19 10:55:02.699257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.255 [2024-11-19 10:55:02.699265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.255 [2024-11-19 10:55:02.699443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.255 [2024-11-19 10:55:02.699622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.255 [2024-11-19 10:55:02.699634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.255 [2024-11-19 10:55:02.699643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.255 [2024-11-19 10:55:02.699652] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.515 [2024-11-19 10:55:02.711799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.515 [2024-11-19 10:55:02.712225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.515 [2024-11-19 10:55:02.712271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.515 [2024-11-19 10:55:02.712297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.515 [2024-11-19 10:55:02.712795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.515 [2024-11-19 10:55:02.712964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.515 [2024-11-19 10:55:02.712974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.515 [2024-11-19 10:55:02.712981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.516 [2024-11-19 10:55:02.712987] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.516 [2024-11-19 10:55:02.724690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.516 [2024-11-19 10:55:02.725037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.516 [2024-11-19 10:55:02.725055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.516 [2024-11-19 10:55:02.725062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.516 [2024-11-19 10:55:02.725224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.516 [2024-11-19 10:55:02.725387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.516 [2024-11-19 10:55:02.725397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.516 [2024-11-19 10:55:02.725404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.516 [2024-11-19 10:55:02.725415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.516 [2024-11-19 10:55:02.737575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.516 [2024-11-19 10:55:02.737989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.516 [2024-11-19 10:55:02.738007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.516 [2024-11-19 10:55:02.738015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.516 [2024-11-19 10:55:02.738178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.516 [2024-11-19 10:55:02.738340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.516 [2024-11-19 10:55:02.738350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.516 [2024-11-19 10:55:02.738357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.516 [2024-11-19 10:55:02.738363] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.516 [2024-11-19 10:55:02.750362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.516 [2024-11-19 10:55:02.750755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.516 [2024-11-19 10:55:02.750772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.516 [2024-11-19 10:55:02.750779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.516 [2024-11-19 10:55:02.750941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.516 [2024-11-19 10:55:02.751112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.516 [2024-11-19 10:55:02.751123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.516 [2024-11-19 10:55:02.751129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.516 [2024-11-19 10:55:02.751135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.516 [2024-11-19 10:55:02.763204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.516 [2024-11-19 10:55:02.763548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.516 [2024-11-19 10:55:02.763567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.516 [2024-11-19 10:55:02.763574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.516 [2024-11-19 10:55:02.763737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.516 [2024-11-19 10:55:02.763901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.516 [2024-11-19 10:55:02.763910] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.516 [2024-11-19 10:55:02.763917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.516 [2024-11-19 10:55:02.763923] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.516 [2024-11-19 10:55:02.776127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.516 [2024-11-19 10:55:02.776482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.516 [2024-11-19 10:55:02.776500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.516 [2024-11-19 10:55:02.776507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.516 [2024-11-19 10:55:02.776670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.516 [2024-11-19 10:55:02.776833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.516 [2024-11-19 10:55:02.776843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.516 [2024-11-19 10:55:02.776849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.516 [2024-11-19 10:55:02.776855] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.516 [2024-11-19 10:55:02.789015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.516 [2024-11-19 10:55:02.789446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.516 [2024-11-19 10:55:02.789492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.516 [2024-11-19 10:55:02.789516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.516 [2024-11-19 10:55:02.790111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.516 [2024-11-19 10:55:02.790308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.516 [2024-11-19 10:55:02.790317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.516 [2024-11-19 10:55:02.790324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.516 [2024-11-19 10:55:02.790330] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.516 [2024-11-19 10:55:02.801940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.516 [2024-11-19 10:55:02.802367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.516 [2024-11-19 10:55:02.802412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.516 [2024-11-19 10:55:02.802436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.516 [2024-11-19 10:55:02.803027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.516 [2024-11-19 10:55:02.803191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.516 [2024-11-19 10:55:02.803201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.516 [2024-11-19 10:55:02.803208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.516 [2024-11-19 10:55:02.803215] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.516 [2024-11-19 10:55:02.814823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.516 [2024-11-19 10:55:02.815275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.516 [2024-11-19 10:55:02.815321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.516 [2024-11-19 10:55:02.815345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.516 [2024-11-19 10:55:02.815930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.516 [2024-11-19 10:55:02.816441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.516 [2024-11-19 10:55:02.816451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.516 [2024-11-19 10:55:02.816458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.516 [2024-11-19 10:55:02.816465] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.516 [2024-11-19 10:55:02.827688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.516 [2024-11-19 10:55:02.828033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.516 [2024-11-19 10:55:02.828052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.516 [2024-11-19 10:55:02.828060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.516 [2024-11-19 10:55:02.828223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.516 [2024-11-19 10:55:02.828386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.517 [2024-11-19 10:55:02.828396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.517 [2024-11-19 10:55:02.828402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.517 [2024-11-19 10:55:02.828409] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.517 [2024-11-19 10:55:02.840617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.517 [2024-11-19 10:55:02.841015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.517 [2024-11-19 10:55:02.841033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.517 [2024-11-19 10:55:02.841041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.517 [2024-11-19 10:55:02.841213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.517 [2024-11-19 10:55:02.841386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.517 [2024-11-19 10:55:02.841396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.517 [2024-11-19 10:55:02.841403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.517 [2024-11-19 10:55:02.841410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.517 [2024-11-19 10:55:02.853727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.517 [2024-11-19 10:55:02.854146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.517 [2024-11-19 10:55:02.854165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.517 [2024-11-19 10:55:02.854173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.517 [2024-11-19 10:55:02.854350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.517 [2024-11-19 10:55:02.854528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.517 [2024-11-19 10:55:02.854541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.517 [2024-11-19 10:55:02.854548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.517 [2024-11-19 10:55:02.854555] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.517 [2024-11-19 10:55:02.866710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.517 [2024-11-19 10:55:02.867121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.517 [2024-11-19 10:55:02.867167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.517 [2024-11-19 10:55:02.867191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.517 [2024-11-19 10:55:02.867677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.517 [2024-11-19 10:55:02.867850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.517 [2024-11-19 10:55:02.867860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.517 [2024-11-19 10:55:02.867867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.517 [2024-11-19 10:55:02.867873] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.517 [2024-11-19 10:55:02.879521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.517 [2024-11-19 10:55:02.879927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.517 [2024-11-19 10:55:02.879945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.517 [2024-11-19 10:55:02.879959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.517 [2024-11-19 10:55:02.880122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.517 [2024-11-19 10:55:02.880285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.517 [2024-11-19 10:55:02.880295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.517 [2024-11-19 10:55:02.880301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.517 [2024-11-19 10:55:02.880307] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.517 [2024-11-19 10:55:02.892380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.517 [2024-11-19 10:55:02.892779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.517 [2024-11-19 10:55:02.892796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.517 [2024-11-19 10:55:02.892803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.517 [2024-11-19 10:55:02.892974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.517 [2024-11-19 10:55:02.893139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.517 [2024-11-19 10:55:02.893148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.517 [2024-11-19 10:55:02.893155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.517 [2024-11-19 10:55:02.893164] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.517 [2024-11-19 10:55:02.905236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.517 [2024-11-19 10:55:02.905609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.517 [2024-11-19 10:55:02.905653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.517 [2024-11-19 10:55:02.905676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.517 [2024-11-19 10:55:02.906269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.517 [2024-11-19 10:55:02.906688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.517 [2024-11-19 10:55:02.906706] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.517 [2024-11-19 10:55:02.906721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.517 [2024-11-19 10:55:02.906735] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.517 [2024-11-19 10:55:02.920037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.517 [2024-11-19 10:55:02.920560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.517 [2024-11-19 10:55:02.920583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.517 [2024-11-19 10:55:02.920594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.517 [2024-11-19 10:55:02.920848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.517 [2024-11-19 10:55:02.921112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.517 [2024-11-19 10:55:02.921126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.517 [2024-11-19 10:55:02.921136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.517 [2024-11-19 10:55:02.921146] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.517 [2024-11-19 10:55:02.933018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.517 [2024-11-19 10:55:02.933429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.517 [2024-11-19 10:55:02.933446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.517 [2024-11-19 10:55:02.933454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.517 [2024-11-19 10:55:02.933625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.517 [2024-11-19 10:55:02.933797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.517 [2024-11-19 10:55:02.933807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.517 [2024-11-19 10:55:02.933813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.517 [2024-11-19 10:55:02.933820] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.517 [2024-11-19 10:55:02.945887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.517 [2024-11-19 10:55:02.946302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.517 [2024-11-19 10:55:02.946367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.517 [2024-11-19 10:55:02.946391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.517 [2024-11-19 10:55:02.946892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.517 [2024-11-19 10:55:02.947061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.518 [2024-11-19 10:55:02.947072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.518 [2024-11-19 10:55:02.947078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.518 [2024-11-19 10:55:02.947085] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.518 [2024-11-19 10:55:02.958733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.518 [2024-11-19 10:55:02.959163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.518 [2024-11-19 10:55:02.959182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.518 [2024-11-19 10:55:02.959190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.518 [2024-11-19 10:55:02.959368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.518 [2024-11-19 10:55:02.959545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.518 [2024-11-19 10:55:02.959556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.518 [2024-11-19 10:55:02.959563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.518 [2024-11-19 10:55:02.959570] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.779 [2024-11-19 10:55:02.971852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.779 [2024-11-19 10:55:02.972260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.779 [2024-11-19 10:55:02.972277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.779 [2024-11-19 10:55:02.972285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.779 [2024-11-19 10:55:02.972447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.779 [2024-11-19 10:55:02.972610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.779 [2024-11-19 10:55:02.972619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.779 [2024-11-19 10:55:02.972625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.779 [2024-11-19 10:55:02.972632] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.779 7095.75 IOPS, 27.72 MiB/s [2024-11-19T09:55:03.228Z] [2024-11-19 10:55:02.985834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.779 [2024-11-19 10:55:02.986168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.779 [2024-11-19 10:55:02.986185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.779 [2024-11-19 10:55:02.986193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.779 [2024-11-19 10:55:02.986361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.779 [2024-11-19 10:55:02.986523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.779 [2024-11-19 10:55:02.986533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.779 [2024-11-19 10:55:02.986540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.779 [2024-11-19 10:55:02.986546] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.779 [2024-11-19 10:55:02.998759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.779 [2024-11-19 10:55:02.999163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.779 [2024-11-19 10:55:02.999180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.779 [2024-11-19 10:55:02.999188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.779 [2024-11-19 10:55:02.999351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.779 [2024-11-19 10:55:02.999515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.779 [2024-11-19 10:55:02.999524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.779 [2024-11-19 10:55:02.999531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.779 [2024-11-19 10:55:02.999538] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.779 [2024-11-19 10:55:03.011603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.779 [2024-11-19 10:55:03.011935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.779 [2024-11-19 10:55:03.011991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.779 [2024-11-19 10:55:03.012015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.779 [2024-11-19 10:55:03.012593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.779 [2024-11-19 10:55:03.013086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.779 [2024-11-19 10:55:03.013096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.779 [2024-11-19 10:55:03.013103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.779 [2024-11-19 10:55:03.013109] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.779 [2024-11-19 10:55:03.024623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.779 [2024-11-19 10:55:03.025039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.779 [2024-11-19 10:55:03.025057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.779 [2024-11-19 10:55:03.025065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.779 [2024-11-19 10:55:03.025228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.779 [2024-11-19 10:55:03.025391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.780 [2024-11-19 10:55:03.025403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.780 [2024-11-19 10:55:03.025410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.780 [2024-11-19 10:55:03.025417] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.780 [2024-11-19 10:55:03.037460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.780 [2024-11-19 10:55:03.037769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.780 [2024-11-19 10:55:03.037786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.780 [2024-11-19 10:55:03.037795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.780 [2024-11-19 10:55:03.037963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.780 [2024-11-19 10:55:03.038127] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.780 [2024-11-19 10:55:03.038136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.780 [2024-11-19 10:55:03.038142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.780 [2024-11-19 10:55:03.038149] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.780 [2024-11-19 10:55:03.050360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.780 [2024-11-19 10:55:03.050710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.780 [2024-11-19 10:55:03.050756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.780 [2024-11-19 10:55:03.050780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.780 [2024-11-19 10:55:03.051235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.780 [2024-11-19 10:55:03.051400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.780 [2024-11-19 10:55:03.051409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.780 [2024-11-19 10:55:03.051416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.780 [2024-11-19 10:55:03.051422] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.780 [2024-11-19 10:55:03.063268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.780 [2024-11-19 10:55:03.063686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.780 [2024-11-19 10:55:03.063703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.780 [2024-11-19 10:55:03.063711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.780 [2024-11-19 10:55:03.063874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.780 [2024-11-19 10:55:03.064043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.780 [2024-11-19 10:55:03.064053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.780 [2024-11-19 10:55:03.064060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.780 [2024-11-19 10:55:03.064066] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.780 [2024-11-19 10:55:03.076111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.780 [2024-11-19 10:55:03.076453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.780 [2024-11-19 10:55:03.076471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.780 [2024-11-19 10:55:03.076478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.780 [2024-11-19 10:55:03.076640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.780 [2024-11-19 10:55:03.076804] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.780 [2024-11-19 10:55:03.076813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.780 [2024-11-19 10:55:03.076820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.780 [2024-11-19 10:55:03.076826] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.780 [2024-11-19 10:55:03.088896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.780 [2024-11-19 10:55:03.089315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.780 [2024-11-19 10:55:03.089332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.780 [2024-11-19 10:55:03.089340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.780 [2024-11-19 10:55:03.089503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.780 [2024-11-19 10:55:03.089666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.780 [2024-11-19 10:55:03.089676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.780 [2024-11-19 10:55:03.089683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.780 [2024-11-19 10:55:03.089689] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.780 [2024-11-19 10:55:03.101752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.780 [2024-11-19 10:55:03.102196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.780 [2024-11-19 10:55:03.102214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.780 [2024-11-19 10:55:03.102222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.780 [2024-11-19 10:55:03.102394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.780 [2024-11-19 10:55:03.102567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.780 [2024-11-19 10:55:03.102577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.780 [2024-11-19 10:55:03.102584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.780 [2024-11-19 10:55:03.102590] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.780 [2024-11-19 10:55:03.114909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.780 [2024-11-19 10:55:03.115343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.780 [2024-11-19 10:55:03.115365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.780 [2024-11-19 10:55:03.115373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.780 [2024-11-19 10:55:03.115550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.780 [2024-11-19 10:55:03.115727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.780 [2024-11-19 10:55:03.115737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.780 [2024-11-19 10:55:03.115743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.780 [2024-11-19 10:55:03.115750] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.780 [2024-11-19 10:55:03.127936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.780 [2024-11-19 10:55:03.128374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.780 [2024-11-19 10:55:03.128420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.780 [2024-11-19 10:55:03.128444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.780 [2024-11-19 10:55:03.129026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.780 [2024-11-19 10:55:03.129416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.780 [2024-11-19 10:55:03.129435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.780 [2024-11-19 10:55:03.129449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.780 [2024-11-19 10:55:03.129462] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.780 [2024-11-19 10:55:03.142814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.780 [2024-11-19 10:55:03.143329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.780 [2024-11-19 10:55:03.143374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.780 [2024-11-19 10:55:03.143398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.780 [2024-11-19 10:55:03.143945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.780 [2024-11-19 10:55:03.144208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.780 [2024-11-19 10:55:03.144221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.780 [2024-11-19 10:55:03.144231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.780 [2024-11-19 10:55:03.144241] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.780 [2024-11-19 10:55:03.155682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.780 [2024-11-19 10:55:03.156100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.781 [2024-11-19 10:55:03.156118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.781 [2024-11-19 10:55:03.156126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.781 [2024-11-19 10:55:03.156298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.781 [2024-11-19 10:55:03.156466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.781 [2024-11-19 10:55:03.156476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.781 [2024-11-19 10:55:03.156483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.781 [2024-11-19 10:55:03.156489] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.781 [2024-11-19 10:55:03.168490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.781 [2024-11-19 10:55:03.168908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.781 [2024-11-19 10:55:03.168926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.781 [2024-11-19 10:55:03.168933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.781 [2024-11-19 10:55:03.169102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.781 [2024-11-19 10:55:03.169266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.781 [2024-11-19 10:55:03.169276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.781 [2024-11-19 10:55:03.169282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.781 [2024-11-19 10:55:03.169289] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.781 [2024-11-19 10:55:03.181369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.781 [2024-11-19 10:55:03.181793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.781 [2024-11-19 10:55:03.181811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.781 [2024-11-19 10:55:03.181819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.781 [2024-11-19 10:55:03.181988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.781 [2024-11-19 10:55:03.182152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.781 [2024-11-19 10:55:03.182162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.781 [2024-11-19 10:55:03.182169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.781 [2024-11-19 10:55:03.182175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.781 [2024-11-19 10:55:03.194238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.781 [2024-11-19 10:55:03.194657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.781 [2024-11-19 10:55:03.194674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.781 [2024-11-19 10:55:03.194682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.781 [2024-11-19 10:55:03.194845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.781 [2024-11-19 10:55:03.195013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.781 [2024-11-19 10:55:03.195023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.781 [2024-11-19 10:55:03.195034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.781 [2024-11-19 10:55:03.195041] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.781 [2024-11-19 10:55:03.207127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.781 [2024-11-19 10:55:03.207471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.781 [2024-11-19 10:55:03.207488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.781 [2024-11-19 10:55:03.207495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.781 [2024-11-19 10:55:03.207657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.781 [2024-11-19 10:55:03.207820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.781 [2024-11-19 10:55:03.207829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.781 [2024-11-19 10:55:03.207836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.781 [2024-11-19 10:55:03.207842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:55.781 [2024-11-19 10:55:03.219923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:55.781 [2024-11-19 10:55:03.220348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.781 [2024-11-19 10:55:03.220388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:55.781 [2024-11-19 10:55:03.220415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:55.781 [2024-11-19 10:55:03.221007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:55.781 [2024-11-19 10:55:03.221576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:55.781 [2024-11-19 10:55:03.221586] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:55.781 [2024-11-19 10:55:03.221593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:55.781 [2024-11-19 10:55:03.221600] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.042 [2024-11-19 10:55:03.232950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.042 [2024-11-19 10:55:03.233315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.042 [2024-11-19 10:55:03.233333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.042 [2024-11-19 10:55:03.233341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.042 [2024-11-19 10:55:03.233519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.042 [2024-11-19 10:55:03.233698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.042 [2024-11-19 10:55:03.233708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.042 [2024-11-19 10:55:03.233715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.042 [2024-11-19 10:55:03.233722] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.042 [2024-11-19 10:55:03.246064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.042 [2024-11-19 10:55:03.246497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.042 [2024-11-19 10:55:03.246515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.042 [2024-11-19 10:55:03.246523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.042 [2024-11-19 10:55:03.246701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.042 [2024-11-19 10:55:03.246880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.042 [2024-11-19 10:55:03.246890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.042 [2024-11-19 10:55:03.246897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.042 [2024-11-19 10:55:03.246903] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.042 [2024-11-19 10:55:03.258903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.042 [2024-11-19 10:55:03.259252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.042 [2024-11-19 10:55:03.259270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.042 [2024-11-19 10:55:03.259277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.042 [2024-11-19 10:55:03.259441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.042 [2024-11-19 10:55:03.259605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.042 [2024-11-19 10:55:03.259615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.042 [2024-11-19 10:55:03.259622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.042 [2024-11-19 10:55:03.259629] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.042 [2024-11-19 10:55:03.271681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.042 [2024-11-19 10:55:03.272088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.042 [2024-11-19 10:55:03.272105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.042 [2024-11-19 10:55:03.272113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.042 [2024-11-19 10:55:03.272276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.042 [2024-11-19 10:55:03.272439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.042 [2024-11-19 10:55:03.272449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.042 [2024-11-19 10:55:03.272456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.042 [2024-11-19 10:55:03.272462] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.042 [2024-11-19 10:55:03.284551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.042 [2024-11-19 10:55:03.284963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.042 [2024-11-19 10:55:03.284985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.042 [2024-11-19 10:55:03.284993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.042 [2024-11-19 10:55:03.285157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.042 [2024-11-19 10:55:03.285320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.042 [2024-11-19 10:55:03.285329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.042 [2024-11-19 10:55:03.285336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.042 [2024-11-19 10:55:03.285342] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.042 [2024-11-19 10:55:03.297403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.042 [2024-11-19 10:55:03.297822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.042 [2024-11-19 10:55:03.297865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.042 [2024-11-19 10:55:03.297891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.042 [2024-11-19 10:55:03.298431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.042 [2024-11-19 10:55:03.298596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.042 [2024-11-19 10:55:03.298605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.042 [2024-11-19 10:55:03.298611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.042 [2024-11-19 10:55:03.298618] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.042 [2024-11-19 10:55:03.310223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.042 [2024-11-19 10:55:03.310617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.042 [2024-11-19 10:55:03.310655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.042 [2024-11-19 10:55:03.310682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.042 [2024-11-19 10:55:03.311276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.042 [2024-11-19 10:55:03.311564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.042 [2024-11-19 10:55:03.311573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.042 [2024-11-19 10:55:03.311579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.042 [2024-11-19 10:55:03.311585] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.042 [2024-11-19 10:55:03.325083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.042 [2024-11-19 10:55:03.325593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.042 [2024-11-19 10:55:03.325639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.042 [2024-11-19 10:55:03.325662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.042 [2024-11-19 10:55:03.326258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.042 [2024-11-19 10:55:03.326750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.042 [2024-11-19 10:55:03.326763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.042 [2024-11-19 10:55:03.326773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.042 [2024-11-19 10:55:03.326783] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.042 [2024-11-19 10:55:03.338044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.042 [2024-11-19 10:55:03.338449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.042 [2024-11-19 10:55:03.338468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.042 [2024-11-19 10:55:03.338476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.042 [2024-11-19 10:55:03.338649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.042 [2024-11-19 10:55:03.338821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.042 [2024-11-19 10:55:03.338831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.042 [2024-11-19 10:55:03.338838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.042 [2024-11-19 10:55:03.338844] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.042 [2024-11-19 10:55:03.350949] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.042 [2024-11-19 10:55:03.351371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.042 [2024-11-19 10:55:03.351388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.043 [2024-11-19 10:55:03.351396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.043 [2024-11-19 10:55:03.351558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.043 [2024-11-19 10:55:03.351721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.043 [2024-11-19 10:55:03.351730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.043 [2024-11-19 10:55:03.351736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.043 [2024-11-19 10:55:03.351743] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.043 [2024-11-19 10:55:03.364102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.043 [2024-11-19 10:55:03.364512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.043 [2024-11-19 10:55:03.364531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.043 [2024-11-19 10:55:03.364539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.043 [2024-11-19 10:55:03.364717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.043 [2024-11-19 10:55:03.364896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.043 [2024-11-19 10:55:03.364906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.043 [2024-11-19 10:55:03.364917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.043 [2024-11-19 10:55:03.364924] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.043 [2024-11-19 10:55:03.376978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.043 [2024-11-19 10:55:03.377402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.043 [2024-11-19 10:55:03.377454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.043 [2024-11-19 10:55:03.377478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.043 [2024-11-19 10:55:03.378027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.043 [2024-11-19 10:55:03.378207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.043 [2024-11-19 10:55:03.378218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.043 [2024-11-19 10:55:03.378224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.043 [2024-11-19 10:55:03.378231] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.043 [2024-11-19 10:55:03.389845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.043 [2024-11-19 10:55:03.390260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.043 [2024-11-19 10:55:03.390278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.043 [2024-11-19 10:55:03.390285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.043 [2024-11-19 10:55:03.390448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.043 [2024-11-19 10:55:03.390611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.043 [2024-11-19 10:55:03.390621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.043 [2024-11-19 10:55:03.390627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.043 [2024-11-19 10:55:03.390633] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.043 [2024-11-19 10:55:03.402688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.043 [2024-11-19 10:55:03.403105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.043 [2024-11-19 10:55:03.403123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.043 [2024-11-19 10:55:03.403131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.043 [2024-11-19 10:55:03.403294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.043 [2024-11-19 10:55:03.403457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.043 [2024-11-19 10:55:03.403466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.043 [2024-11-19 10:55:03.403472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.043 [2024-11-19 10:55:03.403480] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.043 [2024-11-19 10:55:03.415542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.043 [2024-11-19 10:55:03.415937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.043 [2024-11-19 10:55:03.415959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.043 [2024-11-19 10:55:03.415967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.043 [2024-11-19 10:55:03.416130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.043 [2024-11-19 10:55:03.416293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.043 [2024-11-19 10:55:03.416302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.043 [2024-11-19 10:55:03.416309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.043 [2024-11-19 10:55:03.416315] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.043 [2024-11-19 10:55:03.428451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.043 [2024-11-19 10:55:03.428861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.043 [2024-11-19 10:55:03.428878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.043 [2024-11-19 10:55:03.428886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.043 [2024-11-19 10:55:03.429055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.043 [2024-11-19 10:55:03.429218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.043 [2024-11-19 10:55:03.429229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.043 [2024-11-19 10:55:03.429235] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.043 [2024-11-19 10:55:03.429241] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.043 [2024-11-19 10:55:03.441242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.043 [2024-11-19 10:55:03.441660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.043 [2024-11-19 10:55:03.441677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.043 [2024-11-19 10:55:03.441685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.043 [2024-11-19 10:55:03.441848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.043 [2024-11-19 10:55:03.442016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.043 [2024-11-19 10:55:03.442027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.043 [2024-11-19 10:55:03.442034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.043 [2024-11-19 10:55:03.442040] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.043 [2024-11-19 10:55:03.454167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.043 [2024-11-19 10:55:03.454495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.043 [2024-11-19 10:55:03.454512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.043 [2024-11-19 10:55:03.454523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.043 [2024-11-19 10:55:03.454684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.043 [2024-11-19 10:55:03.454847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.043 [2024-11-19 10:55:03.454857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.043 [2024-11-19 10:55:03.454863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.043 [2024-11-19 10:55:03.454870] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.043 [2024-11-19 10:55:03.467078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.043 [2024-11-19 10:55:03.467495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.043 [2024-11-19 10:55:03.467512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.043 [2024-11-19 10:55:03.467520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.043 [2024-11-19 10:55:03.467683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.043 [2024-11-19 10:55:03.467846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.043 [2024-11-19 10:55:03.467856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.043 [2024-11-19 10:55:03.467862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.043 [2024-11-19 10:55:03.467869] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.043 [2024-11-19 10:55:03.479930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.043 [2024-11-19 10:55:03.480277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.044 [2024-11-19 10:55:03.480295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.044 [2024-11-19 10:55:03.480303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.044 [2024-11-19 10:55:03.480466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.044 [2024-11-19 10:55:03.480629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.044 [2024-11-19 10:55:03.480638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.044 [2024-11-19 10:55:03.480644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.044 [2024-11-19 10:55:03.480651] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.304 [2024-11-19 10:55:03.492910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.304 [2024-11-19 10:55:03.493327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.304 [2024-11-19 10:55:03.493345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.304 [2024-11-19 10:55:03.493352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.304 [2024-11-19 10:55:03.493515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.304 [2024-11-19 10:55:03.493682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.304 [2024-11-19 10:55:03.493692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.304 [2024-11-19 10:55:03.493697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.304 [2024-11-19 10:55:03.493703] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.304 [2024-11-19 10:55:03.505761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.304 [2024-11-19 10:55:03.506186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.304 [2024-11-19 10:55:03.506204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.304 [2024-11-19 10:55:03.506212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.304 [2024-11-19 10:55:03.506374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.304 [2024-11-19 10:55:03.506537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.304 [2024-11-19 10:55:03.506546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.304 [2024-11-19 10:55:03.506552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.304 [2024-11-19 10:55:03.506559] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.304 [2024-11-19 10:55:03.518615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.304 [2024-11-19 10:55:03.518934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.304 [2024-11-19 10:55:03.518991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.304 [2024-11-19 10:55:03.519015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.304 [2024-11-19 10:55:03.519592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.304 [2024-11-19 10:55:03.520179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.304 [2024-11-19 10:55:03.520190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.304 [2024-11-19 10:55:03.520196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.304 [2024-11-19 10:55:03.520203] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.304 [2024-11-19 10:55:03.531507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.304 [2024-11-19 10:55:03.531850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.304 [2024-11-19 10:55:03.531867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.304 [2024-11-19 10:55:03.531874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.304 [2024-11-19 10:55:03.532041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.304 [2024-11-19 10:55:03.532206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.304 [2024-11-19 10:55:03.532215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.304 [2024-11-19 10:55:03.532225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.304 [2024-11-19 10:55:03.532233] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.304 [2024-11-19 10:55:03.544313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.304 [2024-11-19 10:55:03.544728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.304 [2024-11-19 10:55:03.544745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.304 [2024-11-19 10:55:03.544753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.304 [2024-11-19 10:55:03.544916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.304 [2024-11-19 10:55:03.545086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.304 [2024-11-19 10:55:03.545096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.304 [2024-11-19 10:55:03.545103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.304 [2024-11-19 10:55:03.545109] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.304 [2024-11-19 10:55:03.557098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.304 [2024-11-19 10:55:03.557492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.304 [2024-11-19 10:55:03.557508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.304 [2024-11-19 10:55:03.557515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.304 [2024-11-19 10:55:03.557679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.304 [2024-11-19 10:55:03.557842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.305 [2024-11-19 10:55:03.557851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.305 [2024-11-19 10:55:03.557857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.305 [2024-11-19 10:55:03.557863] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.305 [2024-11-19 10:55:03.569922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.305 [2024-11-19 10:55:03.570326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.305 [2024-11-19 10:55:03.570343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.305 [2024-11-19 10:55:03.570351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.305 [2024-11-19 10:55:03.570514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.305 [2024-11-19 10:55:03.570679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.305 [2024-11-19 10:55:03.570688] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.305 [2024-11-19 10:55:03.570694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.305 [2024-11-19 10:55:03.570701] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.305 [2024-11-19 10:55:03.582778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.305 [2024-11-19 10:55:03.583196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.305 [2024-11-19 10:55:03.583238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.305 [2024-11-19 10:55:03.583263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.305 [2024-11-19 10:55:03.583841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.305 [2024-11-19 10:55:03.584164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.305 [2024-11-19 10:55:03.584174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.305 [2024-11-19 10:55:03.584180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.305 [2024-11-19 10:55:03.584186] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.305 [2024-11-19 10:55:03.598266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.305 [2024-11-19 10:55:03.598778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.305 [2024-11-19 10:55:03.598827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.305 [2024-11-19 10:55:03.598852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.305 [2024-11-19 10:55:03.599349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.305 [2024-11-19 10:55:03.599606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.305 [2024-11-19 10:55:03.599619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.305 [2024-11-19 10:55:03.599629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.305 [2024-11-19 10:55:03.599639] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.305 [2024-11-19 10:55:03.611291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.305 [2024-11-19 10:55:03.611601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.305 [2024-11-19 10:55:03.611619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.305 [2024-11-19 10:55:03.611627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.305 [2024-11-19 10:55:03.611800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.305 [2024-11-19 10:55:03.611979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.305 [2024-11-19 10:55:03.611990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.305 [2024-11-19 10:55:03.611997] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.305 [2024-11-19 10:55:03.612005] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.305 [2024-11-19 10:55:03.624470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.305 [2024-11-19 10:55:03.624838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.305 [2024-11-19 10:55:03.624856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.305 [2024-11-19 10:55:03.624868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.305 [2024-11-19 10:55:03.625051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.305 [2024-11-19 10:55:03.625240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.305 [2024-11-19 10:55:03.625250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.305 [2024-11-19 10:55:03.625257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.305 [2024-11-19 10:55:03.625264] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.305 [2024-11-19 10:55:03.637427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.305 [2024-11-19 10:55:03.637705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.305 [2024-11-19 10:55:03.637723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.305 [2024-11-19 10:55:03.637730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.305 [2024-11-19 10:55:03.637893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.305 [2024-11-19 10:55:03.638081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.305 [2024-11-19 10:55:03.638092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.305 [2024-11-19 10:55:03.638099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.305 [2024-11-19 10:55:03.638107] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.305 [2024-11-19 10:55:03.650535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.305 [2024-11-19 10:55:03.650922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.305 [2024-11-19 10:55:03.650940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.305 [2024-11-19 10:55:03.650953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.305 [2024-11-19 10:55:03.651130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.305 [2024-11-19 10:55:03.651307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.305 [2024-11-19 10:55:03.651318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.305 [2024-11-19 10:55:03.651324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.305 [2024-11-19 10:55:03.651331] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.305 [2024-11-19 10:55:03.663586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.305 [2024-11-19 10:55:03.663990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.305 [2024-11-19 10:55:03.664009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.305 [2024-11-19 10:55:03.664016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.305 [2024-11-19 10:55:03.664189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.305 [2024-11-19 10:55:03.664366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.305 [2024-11-19 10:55:03.664376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.305 [2024-11-19 10:55:03.664383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.305 [2024-11-19 10:55:03.664390] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.305 [2024-11-19 10:55:03.676678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.305 [2024-11-19 10:55:03.676973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.305 [2024-11-19 10:55:03.676991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.305 [2024-11-19 10:55:03.676999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.305 [2024-11-19 10:55:03.677181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.305 [2024-11-19 10:55:03.677345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.305 [2024-11-19 10:55:03.677355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.305 [2024-11-19 10:55:03.677361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.305 [2024-11-19 10:55:03.677368] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.305 [2024-11-19 10:55:03.689640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.305 [2024-11-19 10:55:03.689994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.305 [2024-11-19 10:55:03.690013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.306 [2024-11-19 10:55:03.690021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.306 [2024-11-19 10:55:03.690193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.306 [2024-11-19 10:55:03.690367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.306 [2024-11-19 10:55:03.690377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.306 [2024-11-19 10:55:03.690384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.306 [2024-11-19 10:55:03.690391] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.306 [2024-11-19 10:55:03.702706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.306 [2024-11-19 10:55:03.703617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.306 [2024-11-19 10:55:03.703643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.306 [2024-11-19 10:55:03.703652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.306 [2024-11-19 10:55:03.703833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.306 [2024-11-19 10:55:03.704017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.306 [2024-11-19 10:55:03.704028] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.306 [2024-11-19 10:55:03.704040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.306 [2024-11-19 10:55:03.704048] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.306 [2024-11-19 10:55:03.715866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.306 [2024-11-19 10:55:03.716269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.306 [2024-11-19 10:55:03.716289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.306 [2024-11-19 10:55:03.716298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.306 [2024-11-19 10:55:03.716476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.306 [2024-11-19 10:55:03.716656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.306 [2024-11-19 10:55:03.716666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.306 [2024-11-19 10:55:03.716675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.306 [2024-11-19 10:55:03.716683] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.306 [2024-11-19 10:55:03.729209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.306 [2024-11-19 10:55:03.729608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.306 [2024-11-19 10:55:03.729627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.306 [2024-11-19 10:55:03.729635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.306 [2024-11-19 10:55:03.729813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.306 [2024-11-19 10:55:03.729997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.306 [2024-11-19 10:55:03.730008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.306 [2024-11-19 10:55:03.730016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.306 [2024-11-19 10:55:03.730023] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.306 [2024-11-19 10:55:03.742354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.306 [2024-11-19 10:55:03.742785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.306 [2024-11-19 10:55:03.742831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.306 [2024-11-19 10:55:03.742856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.306 [2024-11-19 10:55:03.743428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.306 [2024-11-19 10:55:03.743607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.306 [2024-11-19 10:55:03.743618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.306 [2024-11-19 10:55:03.743624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.306 [2024-11-19 10:55:03.743632] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.567 [2024-11-19 10:55:03.755461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.567 [2024-11-19 10:55:03.755838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.567 [2024-11-19 10:55:03.755855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.567 [2024-11-19 10:55:03.755864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.567 [2024-11-19 10:55:03.756047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.567 [2024-11-19 10:55:03.756226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.567 [2024-11-19 10:55:03.756236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.567 [2024-11-19 10:55:03.756242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.567 [2024-11-19 10:55:03.756250] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.567 [2024-11-19 10:55:03.768501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.567 [2024-11-19 10:55:03.768936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.567 [2024-11-19 10:55:03.769006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.567 [2024-11-19 10:55:03.769030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.567 [2024-11-19 10:55:03.769548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.567 [2024-11-19 10:55:03.769722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.567 [2024-11-19 10:55:03.769732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.567 [2024-11-19 10:55:03.769739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.567 [2024-11-19 10:55:03.769746] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.567 [2024-11-19 10:55:03.781410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.567 [2024-11-19 10:55:03.781832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.567 [2024-11-19 10:55:03.781882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.567 [2024-11-19 10:55:03.781906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.567 [2024-11-19 10:55:03.782497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.567 [2024-11-19 10:55:03.782689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.567 [2024-11-19 10:55:03.782699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.567 [2024-11-19 10:55:03.782705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.567 [2024-11-19 10:55:03.782713] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.567 [2024-11-19 10:55:03.794363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.567 [2024-11-19 10:55:03.794689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.567 [2024-11-19 10:55:03.794706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.567 [2024-11-19 10:55:03.794717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.567 [2024-11-19 10:55:03.794882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.567 [2024-11-19 10:55:03.795050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.567 [2024-11-19 10:55:03.795061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.567 [2024-11-19 10:55:03.795067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.567 [2024-11-19 10:55:03.795073] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.567 [2024-11-19 10:55:03.807282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.567 [2024-11-19 10:55:03.807690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.567 [2024-11-19 10:55:03.807735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.567 [2024-11-19 10:55:03.807758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.567 [2024-11-19 10:55:03.808353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.567 [2024-11-19 10:55:03.808921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.567 [2024-11-19 10:55:03.808931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.567 [2024-11-19 10:55:03.808938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.567 [2024-11-19 10:55:03.808945] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.567 [2024-11-19 10:55:03.820292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.567 [2024-11-19 10:55:03.820677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.567 [2024-11-19 10:55:03.820722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.567 [2024-11-19 10:55:03.820746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.567 [2024-11-19 10:55:03.821338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.567 [2024-11-19 10:55:03.821921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.567 [2024-11-19 10:55:03.821958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.567 [2024-11-19 10:55:03.821966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.567 [2024-11-19 10:55:03.821973] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.567 [2024-11-19 10:55:03.833268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.567 [2024-11-19 10:55:03.833611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.567 [2024-11-19 10:55:03.833629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.567 [2024-11-19 10:55:03.833637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.567 [2024-11-19 10:55:03.833809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.567 [2024-11-19 10:55:03.833992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.567 [2024-11-19 10:55:03.834002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.567 [2024-11-19 10:55:03.834009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.567 [2024-11-19 10:55:03.834016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.567 [2024-11-19 10:55:03.846177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.567 [2024-11-19 10:55:03.846529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.567 [2024-11-19 10:55:03.846547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.567 [2024-11-19 10:55:03.846555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.567 [2024-11-19 10:55:03.846728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.567 [2024-11-19 10:55:03.846901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.567 [2024-11-19 10:55:03.846911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.567 [2024-11-19 10:55:03.846917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.567 [2024-11-19 10:55:03.846924] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.567 [2024-11-19 10:55:03.859209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.568 [2024-11-19 10:55:03.859635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.568 [2024-11-19 10:55:03.859674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.568 [2024-11-19 10:55:03.859699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.568 [2024-11-19 10:55:03.860291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.568 [2024-11-19 10:55:03.860876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.568 [2024-11-19 10:55:03.860904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.568 [2024-11-19 10:55:03.860923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.568 [2024-11-19 10:55:03.860943] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.568 [2024-11-19 10:55:03.872242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.568 [2024-11-19 10:55:03.872661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.568 [2024-11-19 10:55:03.872680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.568 [2024-11-19 10:55:03.872688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.568 [2024-11-19 10:55:03.872865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.568 [2024-11-19 10:55:03.873049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.568 [2024-11-19 10:55:03.873060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.568 [2024-11-19 10:55:03.873067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.568 [2024-11-19 10:55:03.873078] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.568 [2024-11-19 10:55:03.885408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.568 [2024-11-19 10:55:03.885772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.568 [2024-11-19 10:55:03.885791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.568 [2024-11-19 10:55:03.885799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.568 [2024-11-19 10:55:03.885981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.568 [2024-11-19 10:55:03.886160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.568 [2024-11-19 10:55:03.886171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.568 [2024-11-19 10:55:03.886177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.568 [2024-11-19 10:55:03.886185] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.568 [2024-11-19 10:55:03.898445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.568 [2024-11-19 10:55:03.898871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.568 [2024-11-19 10:55:03.898889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.568 [2024-11-19 10:55:03.898897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.568 [2024-11-19 10:55:03.899076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.568 [2024-11-19 10:55:03.899250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.568 [2024-11-19 10:55:03.899260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.568 [2024-11-19 10:55:03.899267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.568 [2024-11-19 10:55:03.899274] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.568 [2024-11-19 10:55:03.911330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.568 [2024-11-19 10:55:03.911783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.568 [2024-11-19 10:55:03.911828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.568 [2024-11-19 10:55:03.911852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.568 [2024-11-19 10:55:03.912446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.568 [2024-11-19 10:55:03.913043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.568 [2024-11-19 10:55:03.913071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.568 [2024-11-19 10:55:03.913092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.568 [2024-11-19 10:55:03.913111] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.568 [2024-11-19 10:55:03.924369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.568 [2024-11-19 10:55:03.924720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.568 [2024-11-19 10:55:03.924737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.568 [2024-11-19 10:55:03.924745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.568 [2024-11-19 10:55:03.924917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.568 [2024-11-19 10:55:03.925098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.568 [2024-11-19 10:55:03.925108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.568 [2024-11-19 10:55:03.925115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.568 [2024-11-19 10:55:03.925121] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.568 [2024-11-19 10:55:03.937393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.568 [2024-11-19 10:55:03.937744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.568 [2024-11-19 10:55:03.937762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.568 [2024-11-19 10:55:03.937770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.568 [2024-11-19 10:55:03.937941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.568 [2024-11-19 10:55:03.938121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.568 [2024-11-19 10:55:03.938132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.568 [2024-11-19 10:55:03.938138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.568 [2024-11-19 10:55:03.938146] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.568 [2024-11-19 10:55:03.950420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.568 [2024-11-19 10:55:03.950833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.568 [2024-11-19 10:55:03.950851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.568 [2024-11-19 10:55:03.950859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.568 [2024-11-19 10:55:03.951037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.568 [2024-11-19 10:55:03.951210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.568 [2024-11-19 10:55:03.951220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.568 [2024-11-19 10:55:03.951227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.568 [2024-11-19 10:55:03.951234] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.568 [2024-11-19 10:55:03.963306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.568 [2024-11-19 10:55:03.963682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.568 [2024-11-19 10:55:03.963699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.568 [2024-11-19 10:55:03.963707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.569 [2024-11-19 10:55:03.963873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.569 [2024-11-19 10:55:03.964064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.569 [2024-11-19 10:55:03.964075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.569 [2024-11-19 10:55:03.964081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.569 [2024-11-19 10:55:03.964089] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.569 [2024-11-19 10:55:03.976401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.569 [2024-11-19 10:55:03.976797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.569 [2024-11-19 10:55:03.976842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.569 [2024-11-19 10:55:03.976866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.569 [2024-11-19 10:55:03.977335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.569 [2024-11-19 10:55:03.977509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.569 [2024-11-19 10:55:03.977519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.569 [2024-11-19 10:55:03.977526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.569 [2024-11-19 10:55:03.977533] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.569 5676.60 IOPS, 22.17 MiB/s [2024-11-19T09:55:04.018Z] [2024-11-19 10:55:03.990563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.569 [2024-11-19 10:55:03.990997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.569 [2024-11-19 10:55:03.991017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.569 [2024-11-19 10:55:03.991025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.569 [2024-11-19 10:55:03.991197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.569 [2024-11-19 10:55:03.991370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.569 [2024-11-19 10:55:03.991381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.569 [2024-11-19 10:55:03.991388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.569 [2024-11-19 10:55:03.991396] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.569 [2024-11-19 10:55:04.003461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.569 [2024-11-19 10:55:04.003815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.569 [2024-11-19 10:55:04.003861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.569 [2024-11-19 10:55:04.003885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.569 [2024-11-19 10:55:04.004349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.569 [2024-11-19 10:55:04.004527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.569 [2024-11-19 10:55:04.004537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.569 [2024-11-19 10:55:04.004544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.569 [2024-11-19 10:55:04.004551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.830 [2024-11-19 10:55:04.016676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.830 [2024-11-19 10:55:04.017072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.830 [2024-11-19 10:55:04.017090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.830 [2024-11-19 10:55:04.017099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.830 [2024-11-19 10:55:04.017280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.830 [2024-11-19 10:55:04.017443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.830 [2024-11-19 10:55:04.017453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.830 [2024-11-19 10:55:04.017459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.830 [2024-11-19 10:55:04.017466] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.830 [2024-11-19 10:55:04.029663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.830 [2024-11-19 10:55:04.030043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.830 [2024-11-19 10:55:04.030061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.830 [2024-11-19 10:55:04.030069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.830 [2024-11-19 10:55:04.030233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.830 [2024-11-19 10:55:04.030396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.830 [2024-11-19 10:55:04.030406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.830 [2024-11-19 10:55:04.030412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.830 [2024-11-19 10:55:04.030418] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.830 [2024-11-19 10:55:04.042595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.830 [2024-11-19 10:55:04.043015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.830 [2024-11-19 10:55:04.043061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.830 [2024-11-19 10:55:04.043084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.830 [2024-11-19 10:55:04.043537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.830 [2024-11-19 10:55:04.043701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.830 [2024-11-19 10:55:04.043711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.830 [2024-11-19 10:55:04.043717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.830 [2024-11-19 10:55:04.043727] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.830 [2024-11-19 10:55:04.055660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:56.830 [2024-11-19 10:55:04.056083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.830 [2024-11-19 10:55:04.056101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:56.830 [2024-11-19 10:55:04.056110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:56.830 [2024-11-19 10:55:04.056286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:56.830 [2024-11-19 10:55:04.056450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:56.830 [2024-11-19 10:55:04.056460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:56.830 [2024-11-19 10:55:04.056467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:56.830 [2024-11-19 10:55:04.056473] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:56.830 [2024-11-19 10:55:04.068525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:56.830 [2024-11-19 10:55:04.068940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.830 [2024-11-19 10:55:04.068963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:56.830 [2024-11-19 10:55:04.068971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:56.830 [2024-11-19 10:55:04.069135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:56.830 [2024-11-19 10:55:04.069298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:56.830 [2024-11-19 10:55:04.069307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:56.830 [2024-11-19 10:55:04.069313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:56.830 [2024-11-19 10:55:04.069320] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:56.830 [2024-11-19 10:55:04.081391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:56.830 [2024-11-19 10:55:04.081812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.830 [2024-11-19 10:55:04.081831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:56.830 [2024-11-19 10:55:04.081838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:56.830 [2024-11-19 10:55:04.082008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:56.830 [2024-11-19 10:55:04.082174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:56.830 [2024-11-19 10:55:04.082183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:56.830 [2024-11-19 10:55:04.082190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:56.830 [2024-11-19 10:55:04.082196] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:56.830 [2024-11-19 10:55:04.094239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:56.830 [2024-11-19 10:55:04.094657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.831 [2024-11-19 10:55:04.094673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:56.831 [2024-11-19 10:55:04.094680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:56.831 [2024-11-19 10:55:04.094843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:56.831 [2024-11-19 10:55:04.095030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:56.831 [2024-11-19 10:55:04.095040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:56.831 [2024-11-19 10:55:04.095047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:56.831 [2024-11-19 10:55:04.095054] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:56.831 [2024-11-19 10:55:04.107163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:56.831 [2024-11-19 10:55:04.107576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.831 [2024-11-19 10:55:04.107620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:56.831 [2024-11-19 10:55:04.107645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:56.831 [2024-11-19 10:55:04.108240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:56.831 [2024-11-19 10:55:04.108463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:56.831 [2024-11-19 10:55:04.108472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:56.831 [2024-11-19 10:55:04.108479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:56.831 [2024-11-19 10:55:04.108485] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:56.831 [2024-11-19 10:55:04.120090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:56.831 [2024-11-19 10:55:04.120515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.831 [2024-11-19 10:55:04.120532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:56.831 [2024-11-19 10:55:04.120540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:56.831 [2024-11-19 10:55:04.120711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:56.831 [2024-11-19 10:55:04.120883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:56.831 [2024-11-19 10:55:04.120893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:56.831 [2024-11-19 10:55:04.120899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:56.831 [2024-11-19 10:55:04.120906] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:56.831 [2024-11-19 10:55:04.132899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:56.831 [2024-11-19 10:55:04.133331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.831 [2024-11-19 10:55:04.133349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:56.831 [2024-11-19 10:55:04.133357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:56.831 [2024-11-19 10:55:04.133534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:56.831 [2024-11-19 10:55:04.133707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:56.831 [2024-11-19 10:55:04.133717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:56.831 [2024-11-19 10:55:04.133724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:56.831 [2024-11-19 10:55:04.133730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:56.831 [2024-11-19 10:55:04.146096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:56.831 [2024-11-19 10:55:04.146529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.831 [2024-11-19 10:55:04.146548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:56.831 [2024-11-19 10:55:04.146556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:56.831 [2024-11-19 10:55:04.146735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:56.831 [2024-11-19 10:55:04.146914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:56.831 [2024-11-19 10:55:04.146924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:56.831 [2024-11-19 10:55:04.146931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:56.831 [2024-11-19 10:55:04.146938] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:56.831 [2024-11-19 10:55:04.159094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:56.831 [2024-11-19 10:55:04.159538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.831 [2024-11-19 10:55:04.159556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:56.831 [2024-11-19 10:55:04.159564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:56.831 [2024-11-19 10:55:04.159737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:56.831 [2024-11-19 10:55:04.159908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:56.831 [2024-11-19 10:55:04.159919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:56.831 [2024-11-19 10:55:04.159925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:56.831 [2024-11-19 10:55:04.159932] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:56.831 [2024-11-19 10:55:04.172002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:56.831 [2024-11-19 10:55:04.172342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.831 [2024-11-19 10:55:04.172360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:56.831 [2024-11-19 10:55:04.172368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:56.831 [2024-11-19 10:55:04.172531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:56.831 [2024-11-19 10:55:04.172695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:56.831 [2024-11-19 10:55:04.172708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:56.831 [2024-11-19 10:55:04.172714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:56.831 [2024-11-19 10:55:04.172722] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:56.831 [2024-11-19 10:55:04.184808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:56.831 [2024-11-19 10:55:04.185236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.831 [2024-11-19 10:55:04.185254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:56.831 [2024-11-19 10:55:04.185261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:56.831 [2024-11-19 10:55:04.185424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:56.831 [2024-11-19 10:55:04.185588] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:56.831 [2024-11-19 10:55:04.185597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:56.831 [2024-11-19 10:55:04.185604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:56.831 [2024-11-19 10:55:04.185611] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:56.831 [2024-11-19 10:55:04.197708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:56.831 [2024-11-19 10:55:04.198123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.831 [2024-11-19 10:55:04.198141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:56.831 [2024-11-19 10:55:04.198149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:56.831 [2024-11-19 10:55:04.198311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:56.831 [2024-11-19 10:55:04.198474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:56.831 [2024-11-19 10:55:04.198483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:56.831 [2024-11-19 10:55:04.198490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:56.831 [2024-11-19 10:55:04.198496] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:56.831 [2024-11-19 10:55:04.210584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:56.831 [2024-11-19 10:55:04.210980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.831 [2024-11-19 10:55:04.210998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:56.831 [2024-11-19 10:55:04.211006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:56.832 [2024-11-19 10:55:04.211169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:56.832 [2024-11-19 10:55:04.211333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:56.832 [2024-11-19 10:55:04.211342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:56.832 [2024-11-19 10:55:04.211349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:56.832 [2024-11-19 10:55:04.211358] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:56.832 [2024-11-19 10:55:04.223458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:56.832 [2024-11-19 10:55:04.223898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.832 [2024-11-19 10:55:04.223942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:56.832 [2024-11-19 10:55:04.223983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:56.832 [2024-11-19 10:55:04.224562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:56.832 [2024-11-19 10:55:04.225024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:56.832 [2024-11-19 10:55:04.225035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:56.832 [2024-11-19 10:55:04.225041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:56.832 [2024-11-19 10:55:04.225048] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:56.832 [2024-11-19 10:55:04.236299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:56.832 [2024-11-19 10:55:04.236720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.832 [2024-11-19 10:55:04.236765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:56.832 [2024-11-19 10:55:04.236788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:56.832 [2024-11-19 10:55:04.237382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:56.832 [2024-11-19 10:55:04.237990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:56.832 [2024-11-19 10:55:04.238001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:56.832 [2024-11-19 10:55:04.238008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:56.832 [2024-11-19 10:55:04.238014] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:56.832 [2024-11-19 10:55:04.249150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:56.832 [2024-11-19 10:55:04.249565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.832 [2024-11-19 10:55:04.249581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:56.832 [2024-11-19 10:55:04.249589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:56.832 [2024-11-19 10:55:04.249752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:56.832 [2024-11-19 10:55:04.249916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:56.832 [2024-11-19 10:55:04.249926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:56.832 [2024-11-19 10:55:04.249932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:56.832 [2024-11-19 10:55:04.249939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:56.832 [2024-11-19 10:55:04.262116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:56.832 [2024-11-19 10:55:04.262553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.832 [2024-11-19 10:55:04.262600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:56.832 [2024-11-19 10:55:04.262624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:56.832 [2024-11-19 10:55:04.263219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:56.832 [2024-11-19 10:55:04.263805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:56.832 [2024-11-19 10:55:04.263832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:56.832 [2024-11-19 10:55:04.263853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:56.832 [2024-11-19 10:55:04.263873] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:56.832 [2024-11-19 10:55:04.275145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:56.832 [2024-11-19 10:55:04.275545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.832 [2024-11-19 10:55:04.275579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:56.832 [2024-11-19 10:55:04.275587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:56.832 [2024-11-19 10:55:04.275764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:56.832 [2024-11-19 10:55:04.275941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:56.832 [2024-11-19 10:55:04.275958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:56.832 [2024-11-19 10:55:04.275967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:56.832 [2024-11-19 10:55:04.275975] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:57.093 [2024-11-19 10:55:04.288089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:57.093 [2024-11-19 10:55:04.288491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.093 [2024-11-19 10:55:04.288509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:57.093 [2024-11-19 10:55:04.288518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:57.093 [2024-11-19 10:55:04.288690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:57.093 [2024-11-19 10:55:04.288862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:57.093 [2024-11-19 10:55:04.288872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:57.093 [2024-11-19 10:55:04.288879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:57.093 [2024-11-19 10:55:04.288887] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:57.093 [2024-11-19 10:55:04.300958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:57.093 [2024-11-19 10:55:04.301363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.093 [2024-11-19 10:55:04.301380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:57.093 [2024-11-19 10:55:04.301388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:57.093 [2024-11-19 10:55:04.301557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:57.093 [2024-11-19 10:55:04.301720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:57.093 [2024-11-19 10:55:04.301730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:57.093 [2024-11-19 10:55:04.301736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:57.093 [2024-11-19 10:55:04.301743] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:57.093 [2024-11-19 10:55:04.313773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:57.093 [2024-11-19 10:55:04.314142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.093 [2024-11-19 10:55:04.314187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:57.093 [2024-11-19 10:55:04.314211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:57.093 [2024-11-19 10:55:04.314785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:57.093 [2024-11-19 10:55:04.314954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:57.093 [2024-11-19 10:55:04.314963] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:57.093 [2024-11-19 10:55:04.314969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:57.093 [2024-11-19 10:55:04.314976] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:57.093 [2024-11-19 10:55:04.326770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:57.093 [2024-11-19 10:55:04.327183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.093 [2024-11-19 10:55:04.327229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:57.093 [2024-11-19 10:55:04.327254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:57.093 [2024-11-19 10:55:04.327774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:57.093 [2024-11-19 10:55:04.327938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:57.093 [2024-11-19 10:55:04.327953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:57.093 [2024-11-19 10:55:04.327959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:57.093 [2024-11-19 10:55:04.327966] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:57.093 [2024-11-19 10:55:04.339691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:57.093 [2024-11-19 10:55:04.340030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.093 [2024-11-19 10:55:04.340048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:57.093 [2024-11-19 10:55:04.340056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:57.093 [2024-11-19 10:55:04.340219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:57.093 [2024-11-19 10:55:04.340382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:57.094 [2024-11-19 10:55:04.340395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:57.094 [2024-11-19 10:55:04.340401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:57.094 [2024-11-19 10:55:04.340407] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:57.094 [2024-11-19 10:55:04.352509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:57.094 [2024-11-19 10:55:04.352860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.094 [2024-11-19 10:55:04.352913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:57.094 [2024-11-19 10:55:04.352940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:57.094 [2024-11-19 10:55:04.353373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:57.094 [2024-11-19 10:55:04.353538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:57.094 [2024-11-19 10:55:04.353548] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:57.094 [2024-11-19 10:55:04.353555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:57.094 [2024-11-19 10:55:04.353562] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:57.094 [2024-11-19 10:55:04.365335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:57.094 [2024-11-19 10:55:04.365746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.094 [2024-11-19 10:55:04.365763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:57.094 [2024-11-19 10:55:04.365771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:57.094 [2024-11-19 10:55:04.365933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:57.094 [2024-11-19 10:55:04.366126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:57.094 [2024-11-19 10:55:04.366145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:57.094 [2024-11-19 10:55:04.366153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:57.094 [2024-11-19 10:55:04.366161] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:57.094 [2024-11-19 10:55:04.378206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:57.094 [2024-11-19 10:55:04.378616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.094 [2024-11-19 10:55:04.378655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:57.094 [2024-11-19 10:55:04.378681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:57.094 [2024-11-19 10:55:04.379238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:57.094 [2024-11-19 10:55:04.379403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:57.094 [2024-11-19 10:55:04.379411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:57.094 [2024-11-19 10:55:04.379417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:57.094 [2024-11-19 10:55:04.379427] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:57.094 [2024-11-19 10:55:04.391030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:57.094 [2024-11-19 10:55:04.391457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.094 [2024-11-19 10:55:04.391475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:57.094 [2024-11-19 10:55:04.391483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:57.094 [2024-11-19 10:55:04.391656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:57.094 [2024-11-19 10:55:04.391829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:57.094 [2024-11-19 10:55:04.391839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:57.094 [2024-11-19 10:55:04.391846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:57.094 [2024-11-19 10:55:04.391854] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:57.094 [2024-11-19 10:55:04.404215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:57.094 [2024-11-19 10:55:04.404652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.094 [2024-11-19 10:55:04.404671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:57.094 [2024-11-19 10:55:04.404679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:57.094 [2024-11-19 10:55:04.404858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:57.094 [2024-11-19 10:55:04.405046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:57.094 [2024-11-19 10:55:04.405057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:57.094 [2024-11-19 10:55:04.405064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:57.094 [2024-11-19 10:55:04.405072] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:57.094 [2024-11-19 10:55:04.417258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:57.094 [2024-11-19 10:55:04.417684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.094 [2024-11-19 10:55:04.417703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:57.094 [2024-11-19 10:55:04.417711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:57.094 [2024-11-19 10:55:04.417884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:57.094 [2024-11-19 10:55:04.418065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:57.094 [2024-11-19 10:55:04.418076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:57.094 [2024-11-19 10:55:04.418082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:57.094 [2024-11-19 10:55:04.418089] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:57.094 [2024-11-19 10:55:04.430079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:57.094 [2024-11-19 10:55:04.430471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.094 [2024-11-19 10:55:04.430492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:57.094 [2024-11-19 10:55:04.430500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:57.094 [2024-11-19 10:55:04.430663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:57.094 [2024-11-19 10:55:04.430826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:57.094 [2024-11-19 10:55:04.430835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:57.094 [2024-11-19 10:55:04.430842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:57.094 [2024-11-19 10:55:04.430849] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:57.094 [2024-11-19 10:55:04.442873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:57.094 [2024-11-19 10:55:04.443273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.094 [2024-11-19 10:55:04.443290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:57.094 [2024-11-19 10:55:04.443297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:57.094 [2024-11-19 10:55:04.443459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:57.094 [2024-11-19 10:55:04.443622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:57.094 [2024-11-19 10:55:04.443632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:57.094 [2024-11-19 10:55:04.443639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:57.094 [2024-11-19 10:55:04.443646] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:57.094 [2024-11-19 10:55:04.455727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:57.094 [2024-11-19 10:55:04.456143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.094 [2024-11-19 10:55:04.456161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:57.094 [2024-11-19 10:55:04.456169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:57.094 [2024-11-19 10:55:04.456333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:57.094 [2024-11-19 10:55:04.456496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:57.094 [2024-11-19 10:55:04.456506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:57.094 [2024-11-19 10:55:04.456512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:57.094 [2024-11-19 10:55:04.456519] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:57.094 [2024-11-19 10:55:04.468551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:57.094 [2024-11-19 10:55:04.468973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.094 [2024-11-19 10:55:04.469019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:57.094 [2024-11-19 10:55:04.469043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:57.095 [2024-11-19 10:55:04.469460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:57.095 [2024-11-19 10:55:04.469624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:57.095 [2024-11-19 10:55:04.469634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:57.095 [2024-11-19 10:55:04.469640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:57.095 [2024-11-19 10:55:04.469646] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:57.095 [2024-11-19 10:55:04.481478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:57.095 [2024-11-19 10:55:04.481836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.095 [2024-11-19 10:55:04.481853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:57.095 [2024-11-19 10:55:04.481860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:57.095 [2024-11-19 10:55:04.482048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:57.095 [2024-11-19 10:55:04.482227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:57.095 [2024-11-19 10:55:04.482237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:57.095 [2024-11-19 10:55:04.482244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:57.095 [2024-11-19 10:55:04.482251] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:57.095 [2024-11-19 10:55:04.494315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:57.095 [2024-11-19 10:55:04.494727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.095 [2024-11-19 10:55:04.494744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:57.095 [2024-11-19 10:55:04.494752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:57.095 [2024-11-19 10:55:04.494914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:57.095 [2024-11-19 10:55:04.495107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:57.095 [2024-11-19 10:55:04.495117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:57.095 [2024-11-19 10:55:04.495124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:57.095 [2024-11-19 10:55:04.495131] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:57.095 [2024-11-19 10:55:04.507201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:57.095 [2024-11-19 10:55:04.507628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.095 [2024-11-19 10:55:04.507672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:57.095 [2024-11-19 10:55:04.507697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:57.095 [2024-11-19 10:55:04.508290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:57.095 [2024-11-19 10:55:04.508744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:57.095 [2024-11-19 10:55:04.508757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:57.095 [2024-11-19 10:55:04.508764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:57.095 [2024-11-19 10:55:04.508771] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:57.095 [2024-11-19 10:55:04.520038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:57.095 [2024-11-19 10:55:04.520451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.095 [2024-11-19 10:55:04.520492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:57.095 [2024-11-19 10:55:04.520518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:57.095 [2024-11-19 10:55:04.521111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:57.095 [2024-11-19 10:55:04.521597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:57.095 [2024-11-19 10:55:04.521607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:57.095 [2024-11-19 10:55:04.521613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:57.095 [2024-11-19 10:55:04.521620] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
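Each cycle above records the same failure path that the log's own function names trace out: posix_sock_create() gets errno = 111 because nothing is listening on 10.0.0.2:4420, the TCP qpair is torn down, spdk_nvme_ctrlr_reconnect_poll_async() reports the reinitialization failure, and bdev_nvme arms the next reset. A minimal standalone sketch that reproduces the errno = 111 seen here, using plain POSIX sockets rather than SPDK code (the address and port are copied from the log; ECONNREFUSED is 111 on Linux):

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Address/port taken from the log above; any host/port pair that
     * answers the SYN with an RST (no listener) behaves the same way. */
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no nvmf target listening, this prints errno = 111 (ECONNREFUSED),
         * matching the posix_sock_create() errors in the cycle above. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}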
00:27:57.095 [2024-11-19 10:55:04.532922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:57.095 [2024-11-19 10:55:04.533266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.095 [2024-11-19 10:55:04.533284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:57.095 [2024-11-19 10:55:04.533291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:57.095 [2024-11-19 10:55:04.533454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:57.095 [2024-11-19 10:55:04.533617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:57.095 [2024-11-19 10:55:04.533626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:57.095 [2024-11-19 10:55:04.533633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:57.095 [2024-11-19 10:55:04.533639] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:57.356 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1844393 Killed "${NVMF_APP[@]}" "$@"
00:27:57.356 10:55:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:27:57.356 10:55:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:27:57.356 [2024-11-19 10:55:04.546022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:57.356 10:55:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:57.356 [2024-11-19 10:55:04.546453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.356 [2024-11-19 10:55:04.546472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:57.356 [2024-11-19 10:55:04.546480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:57.356 10:55:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:57.356 [2024-11-19 10:55:04.546658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:57.356 10:55:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:57.356 [2024-11-19 10:55:04.546841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:57.356 [2024-11-19 10:55:04.546852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:57.356 [2024-11-19 10:55:04.546859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:57.356 [2024-11-19 10:55:04.546867] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:57.356 10:55:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1845748
00:27:57.356 10:55:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1845748
00:27:57.356 10:55:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:27:57.356 10:55:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1845748 ']'
00:27:57.356 10:55:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:57.356 10:55:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:57.356 10:55:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:57.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:57.356 10:55:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:57.356 10:55:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:57.356 [2024-11-19 10:55:04.559208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:57.356 [2024-11-19 10:55:04.559634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.356 [2024-11-19 10:55:04.559652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:57.356 [2024-11-19 10:55:04.559659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:57.356 [2024-11-19 10:55:04.559837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:57.356 [2024-11-19 10:55:04.560023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:57.356 [2024-11-19 10:55:04.560034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:57.356 [2024-11-19 10:55:04.560041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:57.356 [2024-11-19 10:55:04.560049] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:57.356 [2024-11-19 10:55:04.572394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:57.356 [2024-11-19 10:55:04.572754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.356 [2024-11-19 10:55:04.572772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:57.356 [2024-11-19 10:55:04.572781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:57.356 [2024-11-19 10:55:04.572965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:57.356 [2024-11-19 10:55:04.573144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:57.356 [2024-11-19 10:55:04.573154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:57.356 [2024-11-19 10:55:04.573161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:57.356 [2024-11-19 10:55:04.573172] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:57.356 [2024-11-19 10:55:04.585396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:57.356 [2024-11-19 10:55:04.585807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.356 [2024-11-19 10:55:04.585824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:57.356 [2024-11-19 10:55:04.585832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:57.356 [2024-11-19 10:55:04.586012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:57.356 [2024-11-19 10:55:04.586186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:57.356 [2024-11-19 10:55:04.586197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:57.356 [2024-11-19 10:55:04.586204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:57.356 [2024-11-19 10:55:04.586210] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:57.357 [2024-11-19 10:55:04.598354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:57.357 [2024-11-19 10:55:04.598792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.357 [2024-11-19 10:55:04.598810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:57.357 [2024-11-19 10:55:04.598819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:57.357 [2024-11-19 10:55:04.598999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:57.357 [2024-11-19 10:55:04.599173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:57.357 [2024-11-19 10:55:04.599183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:57.357 [2024-11-19 10:55:04.599189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:57.357 [2024-11-19 10:55:04.599196] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:57.357 [2024-11-19 10:55:04.600538] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:27:57.357 [2024-11-19 10:55:04.600578] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:57.357 [2024-11-19 10:55:04.611363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:57.357 [2024-11-19 10:55:04.611703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.357 [2024-11-19 10:55:04.611721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:57.357 [2024-11-19 10:55:04.611730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:57.357 [2024-11-19 10:55:04.611903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:57.357 [2024-11-19 10:55:04.612080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:57.357 [2024-11-19 10:55:04.612090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:57.357 [2024-11-19 10:55:04.612101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:57.357 [2024-11-19 10:55:04.612108] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:57.357 [2024-11-19 10:55:04.624354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:57.357 [2024-11-19 10:55:04.624784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.357 [2024-11-19 10:55:04.624802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:57.357 [2024-11-19 10:55:04.624811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:57.357 [2024-11-19 10:55:04.625006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:57.357 [2024-11-19 10:55:04.625183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:57.357 [2024-11-19 10:55:04.625193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:57.357 [2024-11-19 10:55:04.625201] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:57.357 [2024-11-19 10:55:04.625209] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:57.357 [2024-11-19 10:55:04.637374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:57.357 [2024-11-19 10:55:04.637775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.357 [2024-11-19 10:55:04.637793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420
00:27:57.357 [2024-11-19 10:55:04.637801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set
00:27:57.357 [2024-11-19 10:55:04.637979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor
00:27:57.357 [2024-11-19 10:55:04.638151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:57.357 [2024-11-19 10:55:04.638161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:57.357 [2024-11-19 10:55:04.638168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:57.357 [2024-11-19 10:55:04.638176] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:57.357 [2024-11-19 10:55:04.650457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:57.357 [2024-11-19 10:55:04.650902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.357 [2024-11-19 10:55:04.650920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:57.357 [2024-11-19 10:55:04.650928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:57.357 [2024-11-19 10:55:04.651112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:57.357 [2024-11-19 10:55:04.651291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:57.357 [2024-11-19 10:55:04.651300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:57.357 [2024-11-19 10:55:04.651308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:57.357 [2024-11-19 10:55:04.651315] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:57.357 [2024-11-19 10:55:04.663501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:57.357 [2024-11-19 10:55:04.663925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.357 [2024-11-19 10:55:04.663943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:57.357 [2024-11-19 10:55:04.663957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:57.357 [2024-11-19 10:55:04.664135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:57.357 [2024-11-19 10:55:04.664314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:57.357 [2024-11-19 10:55:04.664325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:57.357 [2024-11-19 10:55:04.664331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:57.357 [2024-11-19 10:55:04.664338] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:57.357 [2024-11-19 10:55:04.666422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:57.357 [2024-11-19 10:55:04.676675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:57.357 [2024-11-19 10:55:04.677031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.357 [2024-11-19 10:55:04.677060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:57.357 [2024-11-19 10:55:04.677069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:57.357 [2024-11-19 10:55:04.677244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:57.357 [2024-11-19 10:55:04.677418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:57.357 [2024-11-19 10:55:04.677429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:57.357 [2024-11-19 10:55:04.677436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:57.357 [2024-11-19 10:55:04.677443] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:57.357 [2024-11-19 10:55:04.689714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:57.357 [2024-11-19 10:55:04.690053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.357 [2024-11-19 10:55:04.690078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:57.357 [2024-11-19 10:55:04.690086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:57.357 [2024-11-19 10:55:04.690259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:57.357 [2024-11-19 10:55:04.690433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:57.357 [2024-11-19 10:55:04.690444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:57.357 [2024-11-19 10:55:04.690451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:57.357 [2024-11-19 10:55:04.690460] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:57.357 [2024-11-19 10:55:04.702722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:57.357 [2024-11-19 10:55:04.703131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.357 [2024-11-19 10:55:04.703150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:57.357 [2024-11-19 10:55:04.703163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:57.357 [2024-11-19 10:55:04.703337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:57.357 [2024-11-19 10:55:04.703511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:57.357 [2024-11-19 10:55:04.703521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:57.357 [2024-11-19 10:55:04.703528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:57.357 [2024-11-19 10:55:04.703534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:57.357 [2024-11-19 10:55:04.707773] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:57.357 [2024-11-19 10:55:04.707799] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:57.357 [2024-11-19 10:55:04.707806] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:57.357 [2024-11-19 10:55:04.707812] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:57.357 [2024-11-19 10:55:04.707817] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:57.358 [2024-11-19 10:55:04.709207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:57.358 [2024-11-19 10:55:04.709313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:57.358 [2024-11-19 10:55:04.709313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:57.358 [2024-11-19 10:55:04.715854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:57.358 [2024-11-19 10:55:04.716249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.358 [2024-11-19 10:55:04.716270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:57.358 [2024-11-19 10:55:04.716279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:57.358 [2024-11-19 10:55:04.716460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:57.358 [2024-11-19 10:55:04.716639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:57.358 [2024-11-19 10:55:04.716650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:57.358 [2024-11-19 10:55:04.716657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:57.358 [2024-11-19 10:55:04.716665] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:57.358 [2024-11-19 10:55:04.729019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:57.358 [2024-11-19 10:55:04.729425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.358 [2024-11-19 10:55:04.729446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:57.358 [2024-11-19 10:55:04.729456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:57.358 [2024-11-19 10:55:04.729636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:57.358 [2024-11-19 10:55:04.729817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:57.358 [2024-11-19 10:55:04.729827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:57.358 [2024-11-19 10:55:04.729841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:57.358 [2024-11-19 10:55:04.729849] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:57.358 [2024-11-19 10:55:04.742181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:57.358 [2024-11-19 10:55:04.742560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.358 [2024-11-19 10:55:04.742581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:57.358 [2024-11-19 10:55:04.742590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:57.358 [2024-11-19 10:55:04.742770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:57.358 [2024-11-19 10:55:04.742955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:57.358 [2024-11-19 10:55:04.742966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:57.358 [2024-11-19 10:55:04.742975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:57.358 [2024-11-19 10:55:04.742985] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:57.358 [2024-11-19 10:55:04.755331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:57.358 [2024-11-19 10:55:04.755675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.358 [2024-11-19 10:55:04.755695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:57.358 [2024-11-19 10:55:04.755704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:57.358 [2024-11-19 10:55:04.755884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:57.358 [2024-11-19 10:55:04.756071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:57.358 [2024-11-19 10:55:04.756081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:57.358 [2024-11-19 10:55:04.756089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:57.358 [2024-11-19 10:55:04.756098] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:57.358 [2024-11-19 10:55:04.768439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:57.358 [2024-11-19 10:55:04.768750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.358 [2024-11-19 10:55:04.768771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:57.358 [2024-11-19 10:55:04.768780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:57.358 [2024-11-19 10:55:04.768965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:57.358 [2024-11-19 10:55:04.769147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:57.358 [2024-11-19 10:55:04.769157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:57.358 [2024-11-19 10:55:04.769164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:57.358 [2024-11-19 10:55:04.769172] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:57.358 [2024-11-19 10:55:04.781508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:57.358 [2024-11-19 10:55:04.781954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.358 [2024-11-19 10:55:04.781973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:57.358 [2024-11-19 10:55:04.781981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:57.358 [2024-11-19 10:55:04.782160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:57.358 [2024-11-19 10:55:04.782338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:57.358 [2024-11-19 10:55:04.782348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:57.358 [2024-11-19 10:55:04.782355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:57.358 [2024-11-19 10:55:04.782362] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:57.358 [2024-11-19 10:55:04.794695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:57.358 [2024-11-19 10:55:04.795118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.358 [2024-11-19 10:55:04.795136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:57.358 [2024-11-19 10:55:04.795145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:57.358 [2024-11-19 10:55:04.795323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:57.358 [2024-11-19 10:55:04.795502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:57.358 [2024-11-19 10:55:04.795513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:57.358 [2024-11-19 10:55:04.795519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:57.358 [2024-11-19 10:55:04.795526] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:57.618 [2024-11-19 10:55:04.807830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:57.618 [2024-11-19 10:55:04.808275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.618 [2024-11-19 10:55:04.808293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:57.618 [2024-11-19 10:55:04.808302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:57.618 [2024-11-19 10:55:04.808480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:57.618 [2024-11-19 10:55:04.808658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:57.618 [2024-11-19 10:55:04.808669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:57.618 [2024-11-19 10:55:04.808676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:57.618 [2024-11-19 10:55:04.808683] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:57.618 10:55:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:57.618 10:55:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:27:57.618 10:55:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:57.618 10:55:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:57.618 10:55:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:57.618 [2024-11-19 10:55:04.821004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:57.618 [2024-11-19 10:55:04.821441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.618 [2024-11-19 10:55:04.821459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:57.618 [2024-11-19 10:55:04.821467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:57.618 [2024-11-19 10:55:04.821645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:57.618 [2024-11-19 10:55:04.821824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:57.618 [2024-11-19 10:55:04.821835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:57.618 [2024-11-19 10:55:04.821843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:57.618 [2024-11-19 10:55:04.821852] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:57.618 [2024-11-19 10:55:04.834168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:57.618 [2024-11-19 10:55:04.834512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.618 [2024-11-19 10:55:04.834530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:57.618 [2024-11-19 10:55:04.834538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:57.618 [2024-11-19 10:55:04.834716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:57.618 [2024-11-19 10:55:04.834896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:57.618 [2024-11-19 10:55:04.834907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:57.618 [2024-11-19 10:55:04.834914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:57.618 [2024-11-19 10:55:04.834922] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:57.618 [2024-11-19 10:55:04.847257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:57.618 [2024-11-19 10:55:04.847625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.618 [2024-11-19 10:55:04.847643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:57.618 [2024-11-19 10:55:04.847651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:57.618 [2024-11-19 10:55:04.847830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:57.618 [2024-11-19 10:55:04.848015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:57.618 [2024-11-19 10:55:04.848026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:57.618 [2024-11-19 10:55:04.848036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:57.618 [2024-11-19 10:55:04.848045] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:57.618 10:55:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:57.618 10:55:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:57.618 10:55:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.618 10:55:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:57.618 [2024-11-19 10:55:04.853645] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:57.618 10:55:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.618 10:55:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:57.618 10:55:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.618 10:55:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:57.618 [2024-11-19 10:55:04.860361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:57.618 [2024-11-19 10:55:04.860689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.618 [2024-11-19 10:55:04.860708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:57.618 [2024-11-19 10:55:04.860716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:57.618 [2024-11-19 10:55:04.860894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:57.618 [2024-11-19 10:55:04.861076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:57.618 [2024-11-19 10:55:04.861087] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:57.618 [2024-11-19 10:55:04.861094] 
nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:57.618 [2024-11-19 10:55:04.861101] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:57.618 [2024-11-19 10:55:04.873412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:57.619 [2024-11-19 10:55:04.873851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.619 [2024-11-19 10:55:04.873870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:57.619 [2024-11-19 10:55:04.873879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:57.619 [2024-11-19 10:55:04.874062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:57.619 [2024-11-19 10:55:04.874240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:57.619 [2024-11-19 10:55:04.874250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:57.619 [2024-11-19 10:55:04.874258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:57.619 [2024-11-19 10:55:04.874265] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:57.619 [2024-11-19 10:55:04.886614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:57.619 [2024-11-19 10:55:04.887049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.619 [2024-11-19 10:55:04.887068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:57.619 [2024-11-19 10:55:04.887077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:57.619 [2024-11-19 10:55:04.887256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:57.619 [2024-11-19 10:55:04.887435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:57.619 [2024-11-19 10:55:04.887445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:57.619 [2024-11-19 10:55:04.887457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:57.619 [2024-11-19 10:55:04.887465] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:57.619 Malloc0 00:27:57.619 10:55:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.619 10:55:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:57.619 10:55:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.619 10:55:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:57.619 [2024-11-19 10:55:04.899778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:57.619 [2024-11-19 10:55:04.900144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.619 [2024-11-19 10:55:04.900162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:57.619 [2024-11-19 10:55:04.900170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:57.619 [2024-11-19 10:55:04.900347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:57.619 [2024-11-19 10:55:04.900526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:57.619 [2024-11-19 10:55:04.900536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:57.619 [2024-11-19 10:55:04.900544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:57.619 [2024-11-19 10:55:04.900552] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:57.619 10:55:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.619 10:55:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:57.619 10:55:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.619 10:55:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:57.619 10:55:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.619 10:55:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:57.619 10:55:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.619 10:55:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:57.619 [2024-11-19 10:55:04.912870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:57.619 [2024-11-19 10:55:04.913277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.619 [2024-11-19 10:55:04.913295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x736500 with addr=10.0.0.2, port=4420 00:27:57.619 [2024-11-19 10:55:04.913304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x736500 is same with the state(6) to be set 00:27:57.619 [2024-11-19 10:55:04.913482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736500 (9): Bad file descriptor 00:27:57.619 [2024-11-19 10:55:04.913658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:57.619 [2024-11-19 10:55:04.913669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:57.619 [2024-11-19 10:55:04.913676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:57.619 [2024-11-19 10:55:04.913682] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:57.619 [2024-11-19 10:55:04.915040] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:57.619 10:55:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.619 10:55:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1844826 00:27:57.619 [2024-11-19 10:55:04.925997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:57.619 [2024-11-19 10:55:04.961892] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:27:58.556 4768.83 IOPS, 18.63 MiB/s [2024-11-19T09:55:07.383Z] 5688.00 IOPS, 22.22 MiB/s [2024-11-19T09:55:08.317Z] 6364.00 IOPS, 24.86 MiB/s [2024-11-19T09:55:09.254Z] 6914.78 IOPS, 27.01 MiB/s [2024-11-19T09:55:10.189Z] 7317.20 IOPS, 28.58 MiB/s [2024-11-19T09:55:11.126Z] 7655.00 IOPS, 29.90 MiB/s [2024-11-19T09:55:12.064Z] 7937.08 IOPS, 31.00 MiB/s [2024-11-19T09:55:13.442Z] 8197.77 IOPS, 32.02 MiB/s [2024-11-19T09:55:14.379Z] 8410.93 IOPS, 32.86 MiB/s 00:28:06.930 Latency(us) 00:28:06.930 [2024-11-19T09:55:14.379Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:06.930 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:06.930 Verification LBA range: start 0x0 length 0x4000 00:28:06.930 Nvme1n1 : 15.01 8591.09 33.56 10892.75 0.00 6549.66 454.12 15614.66 00:28:06.930 [2024-11-19T09:55:14.379Z] =================================================================================================================== 00:28:06.930 [2024-11-19T09:55:14.379Z] Total : 8591.09 33.56 10892.75 0.00 6549.66 454.12 15614.66 00:28:06.930 10:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:28:06.930 10:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:06.930 10:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.930 10:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:06.930 10:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.930 10:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:28:06.930 10:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:28:06.930 10:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:06.930 10:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:28:06.930 10:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:06.930 10:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:28:06.930 10:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:06.930 10:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:06.930 rmmod nvme_tcp 00:28:06.930 rmmod nvme_fabrics 00:28:06.930 rmmod nvme_keyring 00:28:06.930 10:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:06.930 10:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:28:06.930 10:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:28:06.930 10:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 1845748 ']' 00:28:06.930 10:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 1845748 00:28:06.930 10:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 1845748 ']' 00:28:06.930 10:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 1845748 00:28:06.930 10:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:28:06.930 10:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:06.930 10:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1845748 
00:28:06.930 10:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:06.930 10:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:06.930 10:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1845748' 00:28:06.930 killing process with pid 1845748 00:28:06.930 10:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 1845748 00:28:06.930 10:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 1845748 00:28:07.190 10:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:07.190 10:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:07.190 10:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:07.190 10:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:28:07.190 10:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:28:07.190 10:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:07.190 10:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:28:07.190 10:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:07.190 10:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:07.190 10:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:07.190 10:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:07.190 10:55:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.096 10:55:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:09.096 00:28:09.096 real 0m26.195s 00:28:09.096 user 1m1.232s 00:28:09.096 sys 0m6.772s 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:09.356 ************************************ 00:28:09.356 END TEST nvmf_bdevperf 00:28:09.356 ************************************ 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.356 ************************************ 00:28:09.356 START TEST nvmf_target_disconnect 00:28:09.356 ************************************ 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:09.356 * Looking for test storage... 
00:28:09.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:09.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.356 --rc genhtml_branch_coverage=1 00:28:09.356 --rc genhtml_function_coverage=1 00:28:09.356 --rc genhtml_legend=1 00:28:09.356 --rc geninfo_all_blocks=1 00:28:09.356 --rc geninfo_unexecuted_blocks=1 00:28:09.356 00:28:09.356 ' 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:09.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.356 --rc genhtml_branch_coverage=1 00:28:09.356 --rc genhtml_function_coverage=1 00:28:09.356 --rc genhtml_legend=1 00:28:09.356 --rc geninfo_all_blocks=1 00:28:09.356 --rc geninfo_unexecuted_blocks=1 00:28:09.356 00:28:09.356 ' 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:09.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.356 --rc genhtml_branch_coverage=1 00:28:09.356 --rc genhtml_function_coverage=1 00:28:09.356 --rc genhtml_legend=1 00:28:09.356 --rc geninfo_all_blocks=1 00:28:09.356 --rc geninfo_unexecuted_blocks=1 00:28:09.356 00:28:09.356 ' 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:09.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.356 --rc genhtml_branch_coverage=1 00:28:09.356 --rc genhtml_function_coverage=1 00:28:09.356 --rc genhtml_legend=1 00:28:09.356 --rc geninfo_all_blocks=1 00:28:09.356 --rc geninfo_unexecuted_blocks=1 00:28:09.356 00:28:09.356 ' 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:09.356 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:09.357 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:09.357 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:09.357 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:09.357 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:09.357 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:09.357 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:09.357 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:09.616 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:09.616 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:09.616 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:09.616 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:09.616 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:09.616 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:09.616 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:09.616 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:09.616 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:28:09.616 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:09.616 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:09.616 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:09.616 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.616 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.616 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.616 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:28:09.616 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.616 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:28:09.616 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:09.616 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:09.616 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:09.616 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:09.616 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:09.616 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:09.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:09.616 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:09.616 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:09.616 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:09.617 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:09.617 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:09.617 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:09.617 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:28:09.617 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:09.617 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:09.617 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:09.617 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:09.617 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:09.617 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:09.617 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:09.617 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.617 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:09.617 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:09.617 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:28:09.617 10:55:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:16.189 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:16.189 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:16.189 Found net devices under 0000:86:00.0: cvl_0_0 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:16.189 Found net devices under 0000:86:00.1: cvl_0_1 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
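The gather_supported_nvmf_pci_devs walk above (match PCI vendor/device IDs, then look under /sys/bus/pci/devices/<bdf>/net for the bound netdev) and the namespace plumbing traced below both reduce to a few sysfs and iproute2 operations. A condensed sketch of the same two-port topology, assuming root, ports already bound to the ice driver, and the cvl_0_0/cvl_0_1 names and 10.0.0.x addresses from this log; everything else is illustrative:

  #!/usr/bin/env bash
  set -e

  # Find the kernel netdev names that sit under each matched PCI function.
  for pci in 0000:86:00.0 0000:86:00.1; do
      for dev in /sys/bus/pci/devices/$pci/net/*; do
          echo "Found net devices under $pci: ${dev##*/}"
      done
  done

  # The target side lives in its own namespace so host and target share one box.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # 10.0.0.1 = initiator (default namespace), 10.0.0.2 = target (namespace).
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Let NVMe/TCP (port 4420) in, tagged so cleanup can find the rule later.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

  # Verify both directions before any NVMe traffic, exactly as the log does.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1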
00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:16.189 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:16.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:16.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:28:16.190 00:28:16.190 --- 10.0.0.2 ping statistics --- 00:28:16.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:16.190 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:16.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:16.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:28:16.190 00:28:16.190 --- 10.0.0.1 ping statistics --- 00:28:16.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:16.190 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:16.190 ************************************ 00:28:16.190 START TEST nvmf_target_disconnect_tc1 00:28:16.190 ************************************ 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:16.190 10:55:22 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:16.190 [2024-11-19 10:55:22.896471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.190 [2024-11-19 10:55:22.896579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13bfab0 with addr=10.0.0.2, port=4420 00:28:16.190 [2024-11-19 10:55:22.896621] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:16.190 [2024-11-19 10:55:22.896654] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:16.190 [2024-11-19 10:55:22.896673] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:28:16.190 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:28:16.190 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:28:16.190 Initializing NVMe Controllers 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:16.190 00:28:16.190 real 0m0.117s 00:28:16.190 user 0m0.055s 00:28:16.190 sys 0m0.062s 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:16.190 ************************************ 00:28:16.190 END TEST nvmf_target_disconnect_tc1 00:28:16.190 ************************************ 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # 
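nvmf_target_disconnect_tc1 above passes precisely because the reconnect probe fails: target_disconnect.sh@32 wraps the call in NOT, so a refused connection becomes exit status 0. A stripped-down sketch of that inversion; the real helper in autotest_common.sh also distinguishes signal deaths via the es > 128 check visible in the trace:

  # succeed only when the wrapped command fails
  NOT() {
      if "$@"; then
          return 1        # unexpectedly succeeded: the test should fail
      fi
      return 0            # failed as expected
  }

  # usage, mirroring tc1: nothing is listening on 10.0.0.2 yet, so this returns 0
  NOT ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'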
xtrace_disable 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:16.190 ************************************ 00:28:16.190 START TEST nvmf_target_disconnect_tc2 00:28:16.190 ************************************ 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1850917 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1850917 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1850917 ']' 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:16.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:16.190 10:55:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:16.190 [2024-11-19 10:55:23.033377] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:28:16.190 [2024-11-19 10:55:23.033421] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:16.190 [2024-11-19 10:55:23.118379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:16.190 [2024-11-19 10:55:23.160700] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:16.190 [2024-11-19 10:55:23.160739] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
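The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from waitforlisten, which polls the freshly forked nvmf_tgt (pid 1850917 here) until its RPC server answers. A simplified equivalent, assuming SPDK's scripts/rpc.py and the default socket path:

  # poll until the target's RPC socket accepts a trivial method call
  waitforlisten() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock}
      for _ in $(seq 1 100); do
          kill -0 "$pid" 2>/dev/null || return 1      # app died while we waited
          if scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null; then
              return 0                                # RPC server is up
          fi
          sleep 0.1
      done
      return 1
  }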
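Once the RPC server answers, the rpc_cmd calls traced below (target_disconnect.sh lines 19 through 26) provision the target in the canonical order: backing bdev, transport, subsystem, namespace, data listener, discovery listener. The same sequence spelled out directly with scripts/rpc.py, flags copied from the trace rather than guessed:

  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB RAM bdev, 512 B blocks
  scripts/rpc.py nvmf_create_transport -t tcp -o           # TCP transport, flags as traced
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420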
00:28:16.190 [2024-11-19 10:55:23.160747] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:16.190 [2024-11-19 10:55:23.160753] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:16.190 [2024-11-19 10:55:23.160758] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:16.190 [2024-11-19 10:55:23.162334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:28:16.191 [2024-11-19 10:55:23.162445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:16.191 [2024-11-19 10:55:23.162351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:28:16.191 [2024-11-19 10:55:23.162446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:28:16.191 10:55:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:16.191 10:55:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:16.191 10:55:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:16.191 10:55:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:16.191 10:55:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:16.191 10:55:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:16.191 10:55:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:16.191 10:55:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.191 10:55:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:16.191 Malloc0 00:28:16.191 10:55:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.191 10:55:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:16.191 10:55:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.191 10:55:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:16.191 [2024-11-19 10:55:23.347710] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:16.191 10:55:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.191 10:55:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:16.191 10:55:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.191 10:55:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:16.191 10:55:23 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.191 10:55:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:16.191 10:55:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.191 10:55:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:16.191 10:55:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.191 10:55:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:16.191 10:55:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.191 10:55:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:16.191 [2024-11-19 10:55:23.379955] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:16.191 10:55:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.191 10:55:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:16.191 10:55:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.191 10:55:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:16.191 10:55:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.191 10:55:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1850950 00:28:16.191 10:55:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:28:16.191 10:55:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:18.104 10:55:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1850917 00:28:18.104 10:55:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:28:18.104 Read completed with error (sct=0, sc=8) 00:28:18.104 starting I/O failed 00:28:18.104 Read completed with error (sct=0, sc=8) 00:28:18.104 starting I/O failed 00:28:18.104 Read completed with error (sct=0, sc=8) 00:28:18.104 starting I/O failed 00:28:18.104 Read completed with error (sct=0, sc=8) 00:28:18.104 starting I/O failed 00:28:18.104 Write completed with error (sct=0, sc=8) 00:28:18.104 starting I/O failed 00:28:18.104 Read completed with error (sct=0, sc=8) 00:28:18.104 starting I/O failed 00:28:18.104 Read completed with error 
(sct=0, sc=8) 00:28:18.104 starting I/O failed 
[... the remaining outstanding I/Os on this qpair complete the same way, alternating "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed", until the qpair is torn down ...]
00:28:18.104 [2024-11-19 10:55:25.414196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 
[... identical failed Read/Write completions for the I/Os outstanding on the next qpair ...]
00:28:18.104 [2024-11-19 10:55:25.414400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 
[... identical failed Read/Write completions ...]
00:28:18.105 [2024-11-19 10:55:25.414599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 
[... identical failed Read/Write completions ...]
00:28:18.105 [2024-11-19 10:55:25.414799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 
00:28:18.105 [2024-11-19 10:55:25.415014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:28:18.105 [2024-11-19 10:55:25.415036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 
00:28:18.105 qpair failed and we were unable to recover it. 
[... every subsequent reconnect attempt repeats this same triple (connect() failed, errno = 111 / sock connection error of tqpair=0x7f6dd4000b90 / "qpair failed and we were unable to recover it.") for as long as the killed target stays down ...]
00:28:18.107 [2024-11-19 10:55:25.427417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.107 [2024-11-19 10:55:25.427426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.107 qpair failed and we were unable to recover it. 00:28:18.107 [2024-11-19 10:55:25.427505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.107 [2024-11-19 10:55:25.427515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.107 qpair failed and we were unable to recover it. 00:28:18.107 [2024-11-19 10:55:25.427697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.107 [2024-11-19 10:55:25.427708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.107 qpair failed and we were unable to recover it. 00:28:18.107 [2024-11-19 10:55:25.427847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.107 [2024-11-19 10:55:25.427857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.107 qpair failed and we were unable to recover it. 00:28:18.107 [2024-11-19 10:55:25.428012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.107 [2024-11-19 10:55:25.428023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.107 qpair failed and we were unable to recover it. 00:28:18.107 [2024-11-19 10:55:25.428101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.107 [2024-11-19 10:55:25.428111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.107 qpair failed and we were unable to recover it. 00:28:18.107 [2024-11-19 10:55:25.428203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.107 [2024-11-19 10:55:25.428212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.107 qpair failed and we were unable to recover it. 00:28:18.107 [2024-11-19 10:55:25.428348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.107 [2024-11-19 10:55:25.428364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.107 qpair failed and we were unable to recover it. 00:28:18.107 [2024-11-19 10:55:25.428609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.107 [2024-11-19 10:55:25.428641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.107 qpair failed and we were unable to recover it. 00:28:18.107 [2024-11-19 10:55:25.428830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.107 [2024-11-19 10:55:25.428861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.107 qpair failed and we were unable to recover it. 
00:28:18.107 [2024-11-19 10:55:25.429011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.107 [2024-11-19 10:55:25.429044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.107 qpair failed and we were unable to recover it. 00:28:18.107 [2024-11-19 10:55:25.429176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.107 [2024-11-19 10:55:25.429209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.107 qpair failed and we were unable to recover it. 00:28:18.107 [2024-11-19 10:55:25.429457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.107 [2024-11-19 10:55:25.429489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.107 qpair failed and we were unable to recover it. 00:28:18.107 [2024-11-19 10:55:25.429731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.107 [2024-11-19 10:55:25.429741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.107 qpair failed and we were unable to recover it. 00:28:18.107 [2024-11-19 10:55:25.429985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.107 [2024-11-19 10:55:25.429996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.107 qpair failed and we were unable to recover it. 00:28:18.107 [2024-11-19 10:55:25.430073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.107 [2024-11-19 10:55:25.430082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.107 qpair failed and we were unable to recover it. 00:28:18.107 [2024-11-19 10:55:25.430278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.107 [2024-11-19 10:55:25.430289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.107 qpair failed and we were unable to recover it. 00:28:18.107 [2024-11-19 10:55:25.430378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.107 [2024-11-19 10:55:25.430388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.107 qpair failed and we were unable to recover it. 00:28:18.107 [2024-11-19 10:55:25.430479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.107 [2024-11-19 10:55:25.430488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.107 qpair failed and we were unable to recover it. 00:28:18.107 [2024-11-19 10:55:25.430623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.107 [2024-11-19 10:55:25.430633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.107 qpair failed and we were unable to recover it. 
00:28:18.107 [2024-11-19 10:55:25.430709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.107 [2024-11-19 10:55:25.430718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.107 qpair failed and we were unable to recover it. 00:28:18.107 [2024-11-19 10:55:25.430868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.107 [2024-11-19 10:55:25.430882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.107 qpair failed and we were unable to recover it. 00:28:18.107 [2024-11-19 10:55:25.431027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.107 [2024-11-19 10:55:25.431042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.107 qpair failed and we were unable to recover it. 00:28:18.107 [2024-11-19 10:55:25.431147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.107 [2024-11-19 10:55:25.431161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.107 qpair failed and we were unable to recover it. 00:28:18.107 [2024-11-19 10:55:25.431358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.107 [2024-11-19 10:55:25.431372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.107 qpair failed and we were unable to recover it. 00:28:18.107 [2024-11-19 10:55:25.431467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.107 [2024-11-19 10:55:25.431480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.107 qpair failed and we were unable to recover it. 00:28:18.107 [2024-11-19 10:55:25.431616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.107 [2024-11-19 10:55:25.431630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.107 qpair failed and we were unable to recover it. 00:28:18.107 [2024-11-19 10:55:25.431830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.107 [2024-11-19 10:55:25.431843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.107 qpair failed and we were unable to recover it. 00:28:18.107 [2024-11-19 10:55:25.431928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.107 [2024-11-19 10:55:25.431941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.107 qpair failed and we were unable to recover it. 00:28:18.107 [2024-11-19 10:55:25.432138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.108 [2024-11-19 10:55:25.432153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.108 qpair failed and we were unable to recover it. 
00:28:18.108 [2024-11-19 10:55:25.432355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.108 [2024-11-19 10:55:25.432369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.108 qpair failed and we were unable to recover it. 00:28:18.108 [2024-11-19 10:55:25.432515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.108 [2024-11-19 10:55:25.432529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.108 qpair failed and we were unable to recover it. 00:28:18.108 [2024-11-19 10:55:25.432682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.108 [2024-11-19 10:55:25.432696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.108 qpair failed and we were unable to recover it. 00:28:18.108 [2024-11-19 10:55:25.432916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.108 [2024-11-19 10:55:25.432930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.108 qpair failed and we were unable to recover it. 00:28:18.108 [2024-11-19 10:55:25.433062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.108 [2024-11-19 10:55:25.433076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.108 qpair failed and we were unable to recover it. 00:28:18.108 [2024-11-19 10:55:25.433221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.108 [2024-11-19 10:55:25.433235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.108 qpair failed and we were unable to recover it. 00:28:18.108 [2024-11-19 10:55:25.433338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.108 [2024-11-19 10:55:25.433352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.108 qpair failed and we were unable to recover it. 00:28:18.108 [2024-11-19 10:55:25.433515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.108 [2024-11-19 10:55:25.433528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.108 qpair failed and we were unable to recover it. 00:28:18.108 [2024-11-19 10:55:25.433704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.108 [2024-11-19 10:55:25.433718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.108 qpair failed and we were unable to recover it. 00:28:18.108 [2024-11-19 10:55:25.433973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.108 [2024-11-19 10:55:25.433989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.108 qpair failed and we were unable to recover it. 
00:28:18.108 [2024-11-19 10:55:25.434080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.108 [2024-11-19 10:55:25.434093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.108 qpair failed and we were unable to recover it. 00:28:18.108 [2024-11-19 10:55:25.434244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.108 [2024-11-19 10:55:25.434258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.108 qpair failed and we were unable to recover it. 00:28:18.108 [2024-11-19 10:55:25.434358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.108 [2024-11-19 10:55:25.434372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.108 qpair failed and we were unable to recover it. 00:28:18.108 [2024-11-19 10:55:25.434512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.108 [2024-11-19 10:55:25.434526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.108 qpair failed and we were unable to recover it. 00:28:18.108 [2024-11-19 10:55:25.434611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.108 [2024-11-19 10:55:25.434625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.108 qpair failed and we were unable to recover it. 00:28:18.108 [2024-11-19 10:55:25.434790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.108 [2024-11-19 10:55:25.434804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.108 qpair failed and we were unable to recover it. 00:28:18.108 [2024-11-19 10:55:25.434892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.108 [2024-11-19 10:55:25.434905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.108 qpair failed and we were unable to recover it. 00:28:18.108 [2024-11-19 10:55:25.435068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.108 [2024-11-19 10:55:25.435086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.108 qpair failed and we were unable to recover it. 00:28:18.108 [2024-11-19 10:55:25.435166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.108 [2024-11-19 10:55:25.435179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.108 qpair failed and we were unable to recover it. 00:28:18.108 [2024-11-19 10:55:25.435325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.108 [2024-11-19 10:55:25.435339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.108 qpair failed and we were unable to recover it. 
00:28:18.108 [2024-11-19 10:55:25.435483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.108 [2024-11-19 10:55:25.435497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.108 qpair failed and we were unable to recover it. 00:28:18.108 [2024-11-19 10:55:25.435653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.108 [2024-11-19 10:55:25.435667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.108 qpair failed and we were unable to recover it. 00:28:18.108 [2024-11-19 10:55:25.435820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.108 [2024-11-19 10:55:25.435834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.108 qpair failed and we were unable to recover it. 00:28:18.108 [2024-11-19 10:55:25.436042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.108 [2024-11-19 10:55:25.436057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.108 qpair failed and we were unable to recover it. 00:28:18.108 [2024-11-19 10:55:25.436149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.108 [2024-11-19 10:55:25.436162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.108 qpair failed and we were unable to recover it. 00:28:18.108 [2024-11-19 10:55:25.436316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.108 [2024-11-19 10:55:25.436329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.108 qpair failed and we were unable to recover it. 00:28:18.108 [2024-11-19 10:55:25.436465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.108 [2024-11-19 10:55:25.436479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.108 qpair failed and we were unable to recover it. 00:28:18.108 [2024-11-19 10:55:25.436655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.108 [2024-11-19 10:55:25.436668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.108 qpair failed and we were unable to recover it. 00:28:18.108 [2024-11-19 10:55:25.436932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.108 [2024-11-19 10:55:25.436946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.108 qpair failed and we were unable to recover it. 00:28:18.108 [2024-11-19 10:55:25.437066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.108 [2024-11-19 10:55:25.437080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.108 qpair failed and we were unable to recover it. 
00:28:18.108 [2024-11-19 10:55:25.437176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.108 [2024-11-19 10:55:25.437189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.108 qpair failed and we were unable to recover it. 00:28:18.109 [2024-11-19 10:55:25.437296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.109 [2024-11-19 10:55:25.437310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.109 qpair failed and we were unable to recover it. 00:28:18.109 [2024-11-19 10:55:25.437460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.109 [2024-11-19 10:55:25.437474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.109 qpair failed and we were unable to recover it. 00:28:18.109 [2024-11-19 10:55:25.437541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.109 [2024-11-19 10:55:25.437554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.109 qpair failed and we were unable to recover it. 00:28:18.109 [2024-11-19 10:55:25.437625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.109 [2024-11-19 10:55:25.437638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.109 qpair failed and we were unable to recover it. 00:28:18.109 [2024-11-19 10:55:25.437718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.109 [2024-11-19 10:55:25.437732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.109 qpair failed and we were unable to recover it. 00:28:18.109 [2024-11-19 10:55:25.437982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.109 [2024-11-19 10:55:25.437998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.109 qpair failed and we were unable to recover it. 00:28:18.109 [2024-11-19 10:55:25.438146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.109 [2024-11-19 10:55:25.438159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.109 qpair failed and we were unable to recover it. 00:28:18.109 [2024-11-19 10:55:25.438342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.109 [2024-11-19 10:55:25.438357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.109 qpair failed and we were unable to recover it. 00:28:18.109 [2024-11-19 10:55:25.438515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.109 [2024-11-19 10:55:25.438529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.109 qpair failed and we were unable to recover it. 
00:28:18.109 [2024-11-19 10:55:25.438687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.109 [2024-11-19 10:55:25.438701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.109 qpair failed and we were unable to recover it. 00:28:18.109 [2024-11-19 10:55:25.438836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.109 [2024-11-19 10:55:25.438849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.109 qpair failed and we were unable to recover it. 00:28:18.109 [2024-11-19 10:55:25.438993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.109 [2024-11-19 10:55:25.439008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.109 qpair failed and we were unable to recover it. 00:28:18.109 [2024-11-19 10:55:25.439169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.109 [2024-11-19 10:55:25.439183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.109 qpair failed and we were unable to recover it. 00:28:18.109 [2024-11-19 10:55:25.439290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.109 [2024-11-19 10:55:25.439305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.109 qpair failed and we were unable to recover it. 00:28:18.109 [2024-11-19 10:55:25.439398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.109 [2024-11-19 10:55:25.439411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.109 qpair failed and we were unable to recover it. 00:28:18.109 [2024-11-19 10:55:25.439511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.109 [2024-11-19 10:55:25.439526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.109 qpair failed and we were unable to recover it. 00:28:18.109 [2024-11-19 10:55:25.439713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.109 [2024-11-19 10:55:25.439727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.109 qpair failed and we were unable to recover it. 00:28:18.109 [2024-11-19 10:55:25.439927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.109 [2024-11-19 10:55:25.439942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.109 qpair failed and we were unable to recover it. 00:28:18.109 [2024-11-19 10:55:25.440162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.109 [2024-11-19 10:55:25.440194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.109 qpair failed and we were unable to recover it. 
00:28:18.109 [2024-11-19 10:55:25.440310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.109 [2024-11-19 10:55:25.440342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.109 qpair failed and we were unable to recover it. 00:28:18.109 [2024-11-19 10:55:25.440525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.109 [2024-11-19 10:55:25.440557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.109 qpair failed and we were unable to recover it. 00:28:18.109 [2024-11-19 10:55:25.440794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.109 [2024-11-19 10:55:25.440826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.109 qpair failed and we were unable to recover it. 00:28:18.109 [2024-11-19 10:55:25.441056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.109 [2024-11-19 10:55:25.441090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.109 qpair failed and we were unable to recover it. 00:28:18.109 [2024-11-19 10:55:25.441283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.109 [2024-11-19 10:55:25.441316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.109 qpair failed and we were unable to recover it. 00:28:18.109 [2024-11-19 10:55:25.441440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.109 [2024-11-19 10:55:25.441472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.109 qpair failed and we were unable to recover it. 00:28:18.109 [2024-11-19 10:55:25.441608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.109 [2024-11-19 10:55:25.441640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.109 qpair failed and we were unable to recover it. 00:28:18.109 [2024-11-19 10:55:25.441919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.109 [2024-11-19 10:55:25.442009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.109 qpair failed and we were unable to recover it. 00:28:18.109 [2024-11-19 10:55:25.442204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.109 [2024-11-19 10:55:25.442237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.109 qpair failed and we were unable to recover it. 00:28:18.109 [2024-11-19 10:55:25.442444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.109 [2024-11-19 10:55:25.442476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.109 qpair failed and we were unable to recover it. 
00:28:18.109 [2024-11-19 10:55:25.442777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.109 [2024-11-19 10:55:25.442808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.109 qpair failed and we were unable to recover it. 00:28:18.109 [2024-11-19 10:55:25.443052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.109 [2024-11-19 10:55:25.443086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.109 qpair failed and we were unable to recover it. 00:28:18.109 [2024-11-19 10:55:25.443229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.109 [2024-11-19 10:55:25.443261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.109 qpair failed and we were unable to recover it. 00:28:18.109 [2024-11-19 10:55:25.443401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.109 [2024-11-19 10:55:25.443433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.109 qpair failed and we were unable to recover it. 00:28:18.109 [2024-11-19 10:55:25.443563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.109 [2024-11-19 10:55:25.443595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.109 qpair failed and we were unable to recover it. 00:28:18.109 [2024-11-19 10:55:25.443833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.109 [2024-11-19 10:55:25.443866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.109 qpair failed and we were unable to recover it. 00:28:18.109 [2024-11-19 10:55:25.444100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.109 [2024-11-19 10:55:25.444133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.109 qpair failed and we were unable to recover it. 00:28:18.109 [2024-11-19 10:55:25.444328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.109 [2024-11-19 10:55:25.444360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.109 qpair failed and we were unable to recover it. 00:28:18.109 [2024-11-19 10:55:25.444483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.110 [2024-11-19 10:55:25.444515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.110 qpair failed and we were unable to recover it. 00:28:18.110 [2024-11-19 10:55:25.444743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.110 [2024-11-19 10:55:25.444775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.110 qpair failed and we were unable to recover it. 
00:28:18.110 [2024-11-19 10:55:25.444954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.110 [2024-11-19 10:55:25.444988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.110 qpair failed and we were unable to recover it. 00:28:18.110 [2024-11-19 10:55:25.445185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.110 [2024-11-19 10:55:25.445217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.110 qpair failed and we were unable to recover it. 00:28:18.110 [2024-11-19 10:55:25.445358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.110 [2024-11-19 10:55:25.445389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.110 qpair failed and we were unable to recover it. 00:28:18.110 [2024-11-19 10:55:25.445659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.110 [2024-11-19 10:55:25.445691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.110 qpair failed and we were unable to recover it. 00:28:18.110 [2024-11-19 10:55:25.445938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.110 [2024-11-19 10:55:25.445977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.110 qpair failed and we were unable to recover it. 00:28:18.110 [2024-11-19 10:55:25.446172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.110 [2024-11-19 10:55:25.446205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.110 qpair failed and we were unable to recover it. 00:28:18.110 [2024-11-19 10:55:25.446402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.110 [2024-11-19 10:55:25.446434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.110 qpair failed and we were unable to recover it. 00:28:18.110 [2024-11-19 10:55:25.446634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.110 [2024-11-19 10:55:25.446666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.110 qpair failed and we were unable to recover it. 00:28:18.110 [2024-11-19 10:55:25.446870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.110 [2024-11-19 10:55:25.446902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.110 qpair failed and we were unable to recover it. 00:28:18.110 [2024-11-19 10:55:25.447116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.110 [2024-11-19 10:55:25.447149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.110 qpair failed and we were unable to recover it. 
00:28:18.110 [2024-11-19 10:55:25.447287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.110 [2024-11-19 10:55:25.447320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.110 qpair failed and we were unable to recover it. 00:28:18.110 [2024-11-19 10:55:25.447454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.110 [2024-11-19 10:55:25.447486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.110 qpair failed and we were unable to recover it. 00:28:18.110 [2024-11-19 10:55:25.447719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.110 [2024-11-19 10:55:25.447751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.110 qpair failed and we were unable to recover it. 00:28:18.110 [2024-11-19 10:55:25.447955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.110 [2024-11-19 10:55:25.447987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.110 qpair failed and we were unable to recover it. 00:28:18.110 [2024-11-19 10:55:25.448120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.110 [2024-11-19 10:55:25.448151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.110 qpair failed and we were unable to recover it. 00:28:18.110 [2024-11-19 10:55:25.448343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.110 [2024-11-19 10:55:25.448375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.110 qpair failed and we were unable to recover it. 00:28:18.110 [2024-11-19 10:55:25.448576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.110 [2024-11-19 10:55:25.448608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.110 qpair failed and we were unable to recover it. 00:28:18.110 [2024-11-19 10:55:25.448868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.110 [2024-11-19 10:55:25.448899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.110 qpair failed and we were unable to recover it. 00:28:18.110 [2024-11-19 10:55:25.449078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.110 [2024-11-19 10:55:25.449111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.110 qpair failed and we were unable to recover it. 00:28:18.110 [2024-11-19 10:55:25.449304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.110 [2024-11-19 10:55:25.449335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.110 qpair failed and we were unable to recover it. 
00:28:18.110 [2024-11-19 10:55:25.449443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.110 [2024-11-19 10:55:25.449475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.110 qpair failed and we were unable to recover it. 00:28:18.110 [2024-11-19 10:55:25.449736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.110 [2024-11-19 10:55:25.449768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.110 qpair failed and we were unable to recover it. 00:28:18.110 [2024-11-19 10:55:25.449981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.110 [2024-11-19 10:55:25.450014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.110 qpair failed and we were unable to recover it. 00:28:18.110 [2024-11-19 10:55:25.450201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.110 [2024-11-19 10:55:25.450232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.110 qpair failed and we were unable to recover it. 00:28:18.110 [2024-11-19 10:55:25.450426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.110 [2024-11-19 10:55:25.450455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.110 qpair failed and we were unable to recover it. 00:28:18.110 [2024-11-19 10:55:25.450681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.110 [2024-11-19 10:55:25.450712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.110 qpair failed and we were unable to recover it. 00:28:18.110 [2024-11-19 10:55:25.450898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.110 [2024-11-19 10:55:25.450927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.110 qpair failed and we were unable to recover it. 00:28:18.110 [2024-11-19 10:55:25.451132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.110 [2024-11-19 10:55:25.451169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.110 qpair failed and we were unable to recover it. 00:28:18.110 [2024-11-19 10:55:25.451408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.110 [2024-11-19 10:55:25.451438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.110 qpair failed and we were unable to recover it. 00:28:18.110 [2024-11-19 10:55:25.451640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.110 [2024-11-19 10:55:25.451669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.110 qpair failed and we were unable to recover it. 
00:28:18.110 [2024-11-19 10:55:25.451909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.110 [2024-11-19 10:55:25.451938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.110 qpair failed and we were unable to recover it. 00:28:18.110 [2024-11-19 10:55:25.452143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.110 [2024-11-19 10:55:25.452173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.110 qpair failed and we were unable to recover it. 00:28:18.110 [2024-11-19 10:55:25.452363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.110 [2024-11-19 10:55:25.452393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.110 qpair failed and we were unable to recover it. 00:28:18.110 [2024-11-19 10:55:25.452573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.110 [2024-11-19 10:55:25.452603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.110 qpair failed and we were unable to recover it. 00:28:18.110 [2024-11-19 10:55:25.452862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.110 [2024-11-19 10:55:25.452891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.110 qpair failed and we were unable to recover it. 00:28:18.110 [2024-11-19 10:55:25.453069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.110 [2024-11-19 10:55:25.453100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.110 qpair failed and we were unable to recover it. 00:28:18.111 [2024-11-19 10:55:25.453221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.111 [2024-11-19 10:55:25.453250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.111 qpair failed and we were unable to recover it. 00:28:18.111 [2024-11-19 10:55:25.453388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.111 [2024-11-19 10:55:25.453418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.111 qpair failed and we were unable to recover it. 00:28:18.111 [2024-11-19 10:55:25.453690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.111 [2024-11-19 10:55:25.453720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.111 qpair failed and we were unable to recover it. 00:28:18.111 [2024-11-19 10:55:25.453978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.111 [2024-11-19 10:55:25.454009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.111 qpair failed and we were unable to recover it. 
00:28:18.112 [2024-11-19 10:55:25.467859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.112 [2024-11-19 10:55:25.467889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:18.112 qpair failed and we were unable to recover it.
[... identical connect() failed, errno = 111 / qpair failure records for tqpair=0x7f6dd8000b90 repeat; duplicate records omitted ...]
00:28:18.113 [2024-11-19 10:55:25.477527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.113 [2024-11-19 10:55:25.477558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:18.113 qpair failed and we were unable to recover it.
[... identical connect() failed, errno = 111 / qpair failure records for tqpair=0x22c8ba0 repeat; duplicate records omitted ...]
00:28:18.114 [2024-11-19 10:55:25.479727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.114 [2024-11-19 10:55:25.479759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.114 qpair failed and we were unable to recover it. 00:28:18.114 [2024-11-19 10:55:25.479981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.114 [2024-11-19 10:55:25.480016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.114 qpair failed and we were unable to recover it. 00:28:18.114 [2024-11-19 10:55:25.480197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.114 [2024-11-19 10:55:25.480228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.114 qpair failed and we were unable to recover it. 00:28:18.114 [2024-11-19 10:55:25.480353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.114 [2024-11-19 10:55:25.480385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.114 qpair failed and we were unable to recover it. 00:28:18.114 [2024-11-19 10:55:25.480642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.114 [2024-11-19 10:55:25.480673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.114 qpair failed and we were unable to recover it. 00:28:18.114 [2024-11-19 10:55:25.480887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.114 [2024-11-19 10:55:25.480919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.114 qpair failed and we were unable to recover it. 00:28:18.114 [2024-11-19 10:55:25.481088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.114 [2024-11-19 10:55:25.481123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.114 qpair failed and we were unable to recover it. 00:28:18.114 [2024-11-19 10:55:25.481294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.114 [2024-11-19 10:55:25.481325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.114 qpair failed and we were unable to recover it. 00:28:18.114 [2024-11-19 10:55:25.481583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.114 [2024-11-19 10:55:25.481614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.114 qpair failed and we were unable to recover it. 00:28:18.114 [2024-11-19 10:55:25.481882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.114 [2024-11-19 10:55:25.481915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.114 qpair failed and we were unable to recover it. 
00:28:18.114 [2024-11-19 10:55:25.482181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.114 [2024-11-19 10:55:25.482215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.114 qpair failed and we were unable to recover it. 00:28:18.114 [2024-11-19 10:55:25.482450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.114 [2024-11-19 10:55:25.482489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.114 qpair failed and we were unable to recover it. 00:28:18.114 [2024-11-19 10:55:25.482697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.114 [2024-11-19 10:55:25.482728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.114 qpair failed and we were unable to recover it. 00:28:18.114 [2024-11-19 10:55:25.482935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.114 [2024-11-19 10:55:25.482983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.114 qpair failed and we were unable to recover it. 00:28:18.114 [2024-11-19 10:55:25.483121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.114 [2024-11-19 10:55:25.483153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.114 qpair failed and we were unable to recover it. 00:28:18.114 [2024-11-19 10:55:25.483273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.114 [2024-11-19 10:55:25.483305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.114 qpair failed and we were unable to recover it. 00:28:18.114 [2024-11-19 10:55:25.483445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.114 [2024-11-19 10:55:25.483477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.114 qpair failed and we were unable to recover it. 00:28:18.114 [2024-11-19 10:55:25.483649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.114 [2024-11-19 10:55:25.483680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.114 qpair failed and we were unable to recover it. 00:28:18.114 [2024-11-19 10:55:25.483873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.114 [2024-11-19 10:55:25.483905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.114 qpair failed and we were unable to recover it. 00:28:18.114 [2024-11-19 10:55:25.484183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.114 [2024-11-19 10:55:25.484218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.114 qpair failed and we were unable to recover it. 
00:28:18.114 [2024-11-19 10:55:25.484423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.114 [2024-11-19 10:55:25.484455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.114 qpair failed and we were unable to recover it. 00:28:18.114 [2024-11-19 10:55:25.484602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.114 [2024-11-19 10:55:25.484634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.114 qpair failed and we were unable to recover it. 00:28:18.114 [2024-11-19 10:55:25.484843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.114 [2024-11-19 10:55:25.484875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.114 qpair failed and we were unable to recover it. 00:28:18.114 [2024-11-19 10:55:25.485065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.114 [2024-11-19 10:55:25.485098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.114 qpair failed and we were unable to recover it. 00:28:18.114 [2024-11-19 10:55:25.485242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.114 [2024-11-19 10:55:25.485274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.114 qpair failed and we were unable to recover it. 00:28:18.114 [2024-11-19 10:55:25.485460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.114 [2024-11-19 10:55:25.485494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.114 qpair failed and we were unable to recover it. 00:28:18.114 [2024-11-19 10:55:25.485786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.114 [2024-11-19 10:55:25.485818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.114 qpair failed and we were unable to recover it. 00:28:18.114 [2024-11-19 10:55:25.485960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.114 [2024-11-19 10:55:25.485994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.114 qpair failed and we were unable to recover it. 00:28:18.114 [2024-11-19 10:55:25.486188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.114 [2024-11-19 10:55:25.486221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.114 qpair failed and we were unable to recover it. 00:28:18.114 [2024-11-19 10:55:25.486444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.114 [2024-11-19 10:55:25.486475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.114 qpair failed and we were unable to recover it. 
00:28:18.114 [2024-11-19 10:55:25.486774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.114 [2024-11-19 10:55:25.486806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.114 qpair failed and we were unable to recover it. 00:28:18.114 [2024-11-19 10:55:25.487023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.114 [2024-11-19 10:55:25.487056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.114 qpair failed and we were unable to recover it. 00:28:18.114 [2024-11-19 10:55:25.487247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.115 [2024-11-19 10:55:25.487279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.115 qpair failed and we were unable to recover it. 00:28:18.115 [2024-11-19 10:55:25.487569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.115 [2024-11-19 10:55:25.487601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.115 qpair failed and we were unable to recover it. 00:28:18.115 [2024-11-19 10:55:25.487857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.115 [2024-11-19 10:55:25.487889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.115 qpair failed and we were unable to recover it. 00:28:18.115 [2024-11-19 10:55:25.488150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.115 [2024-11-19 10:55:25.488182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.115 qpair failed and we were unable to recover it. 00:28:18.115 [2024-11-19 10:55:25.488320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.115 [2024-11-19 10:55:25.488353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.115 qpair failed and we were unable to recover it. 00:28:18.115 [2024-11-19 10:55:25.488544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.115 [2024-11-19 10:55:25.488574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.115 qpair failed and we were unable to recover it. 00:28:18.115 [2024-11-19 10:55:25.488688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.115 [2024-11-19 10:55:25.488727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.115 qpair failed and we were unable to recover it. 00:28:18.115 [2024-11-19 10:55:25.488903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.115 [2024-11-19 10:55:25.488934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.115 qpair failed and we were unable to recover it. 
00:28:18.115 [2024-11-19 10:55:25.489186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.115 [2024-11-19 10:55:25.489218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.115 qpair failed and we were unable to recover it. 00:28:18.115 [2024-11-19 10:55:25.489339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.115 [2024-11-19 10:55:25.489371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.115 qpair failed and we were unable to recover it. 00:28:18.115 [2024-11-19 10:55:25.489506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.115 [2024-11-19 10:55:25.489539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.115 qpair failed and we were unable to recover it. 00:28:18.115 [2024-11-19 10:55:25.489777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.115 [2024-11-19 10:55:25.489808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.115 qpair failed and we were unable to recover it. 00:28:18.115 [2024-11-19 10:55:25.489991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.115 [2024-11-19 10:55:25.490024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.115 qpair failed and we were unable to recover it. 00:28:18.115 [2024-11-19 10:55:25.490219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.115 [2024-11-19 10:55:25.490252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.115 qpair failed and we were unable to recover it. 00:28:18.115 [2024-11-19 10:55:25.490448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.115 [2024-11-19 10:55:25.490479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.115 qpair failed and we were unable to recover it. 00:28:18.115 [2024-11-19 10:55:25.490689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.115 [2024-11-19 10:55:25.490721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.115 qpair failed and we were unable to recover it. 00:28:18.115 [2024-11-19 10:55:25.490896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.115 [2024-11-19 10:55:25.490928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.115 qpair failed and we were unable to recover it. 00:28:18.115 [2024-11-19 10:55:25.491200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.115 [2024-11-19 10:55:25.491233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.115 qpair failed and we were unable to recover it. 
00:28:18.115 [2024-11-19 10:55:25.491407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.115 [2024-11-19 10:55:25.491438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.115 qpair failed and we were unable to recover it. 00:28:18.115 [2024-11-19 10:55:25.491578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.115 [2024-11-19 10:55:25.491610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.115 qpair failed and we were unable to recover it. 00:28:18.115 [2024-11-19 10:55:25.491925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.115 [2024-11-19 10:55:25.491968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.115 qpair failed and we were unable to recover it. 00:28:18.115 [2024-11-19 10:55:25.492160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.115 [2024-11-19 10:55:25.492191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.115 qpair failed and we were unable to recover it. 00:28:18.115 [2024-11-19 10:55:25.492374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.115 [2024-11-19 10:55:25.492405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.115 qpair failed and we were unable to recover it. 00:28:18.115 [2024-11-19 10:55:25.492603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.115 [2024-11-19 10:55:25.492633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.115 qpair failed and we were unable to recover it. 00:28:18.115 [2024-11-19 10:55:25.492822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.115 [2024-11-19 10:55:25.492854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.115 qpair failed and we were unable to recover it. 00:28:18.115 [2024-11-19 10:55:25.493123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.115 [2024-11-19 10:55:25.493157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.115 qpair failed and we were unable to recover it. 00:28:18.115 [2024-11-19 10:55:25.493348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.115 [2024-11-19 10:55:25.493380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.115 qpair failed and we were unable to recover it. 00:28:18.115 [2024-11-19 10:55:25.493664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.115 [2024-11-19 10:55:25.493695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.115 qpair failed and we were unable to recover it. 
00:28:18.119 [2024-11-19 10:55:25.527309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.119 [2024-11-19 10:55:25.527341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:18.119 qpair failed and we were unable to recover it.
00:28:18.119 [2024-11-19 10:55:25.527591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.119 [2024-11-19 10:55:25.527679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:18.119 qpair failed and we were unable to recover it.
00:28:18.119 [2024-11-19 10:55:25.528029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.119 [2024-11-19 10:55:25.528069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:18.119 qpair failed and we were unable to recover it.
00:28:18.119 [2024-11-19 10:55:25.528202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.119 [2024-11-19 10:55:25.528235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:18.119 qpair failed and we were unable to recover it.
00:28:18.119 [2024-11-19 10:55:25.528432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.119 [2024-11-19 10:55:25.528464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:18.119 qpair failed and we were unable to recover it.
00:28:18.119 [2024-11-19 10:55:25.528609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.119 [2024-11-19 10:55:25.528640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:18.119 qpair failed and we were unable to recover it.
00:28:18.119 [2024-11-19 10:55:25.528909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.119 [2024-11-19 10:55:25.528940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:18.119 qpair failed and we were unable to recover it.
00:28:18.119 [2024-11-19 10:55:25.529142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.119 [2024-11-19 10:55:25.529175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:18.119 qpair failed and we were unable to recover it.
00:28:18.119 [2024-11-19 10:55:25.529400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.119 [2024-11-19 10:55:25.529433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:18.119 qpair failed and we were unable to recover it.
00:28:18.119 [2024-11-19 10:55:25.529660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.119 [2024-11-19 10:55:25.529692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:18.119 qpair failed and we were unable to recover it.
00:28:18.120 [2024-11-19 10:55:25.539476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.120 [2024-11-19 10:55:25.539509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.120 qpair failed and we were unable to recover it. 00:28:18.120 [2024-11-19 10:55:25.539650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.120 [2024-11-19 10:55:25.539681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.120 qpair failed and we were unable to recover it. 00:28:18.120 [2024-11-19 10:55:25.539818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.120 [2024-11-19 10:55:25.539849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.120 qpair failed and we were unable to recover it. 00:28:18.120 [2024-11-19 10:55:25.540079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.120 [2024-11-19 10:55:25.540112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.121 qpair failed and we were unable to recover it. 00:28:18.121 [2024-11-19 10:55:25.540330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.121 [2024-11-19 10:55:25.540368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.121 qpair failed and we were unable to recover it. 00:28:18.121 [2024-11-19 10:55:25.540555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.121 [2024-11-19 10:55:25.540587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.121 qpair failed and we were unable to recover it. 00:28:18.121 [2024-11-19 10:55:25.540835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.121 [2024-11-19 10:55:25.540866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.121 qpair failed and we were unable to recover it. 00:28:18.121 [2024-11-19 10:55:25.541064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.121 [2024-11-19 10:55:25.541098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.121 qpair failed and we were unable to recover it. 00:28:18.121 [2024-11-19 10:55:25.541304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.121 [2024-11-19 10:55:25.541335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.121 qpair failed and we were unable to recover it. 00:28:18.121 [2024-11-19 10:55:25.541589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.121 [2024-11-19 10:55:25.541620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.121 qpair failed and we were unable to recover it. 
00:28:18.121 [2024-11-19 10:55:25.541886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.121 [2024-11-19 10:55:25.541918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.121 qpair failed and we were unable to recover it. 00:28:18.121 [2024-11-19 10:55:25.542148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.121 [2024-11-19 10:55:25.542181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.121 qpair failed and we were unable to recover it. 00:28:18.121 [2024-11-19 10:55:25.542333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.121 [2024-11-19 10:55:25.542364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.121 qpair failed and we were unable to recover it. 00:28:18.121 [2024-11-19 10:55:25.542557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.121 [2024-11-19 10:55:25.542590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.121 qpair failed and we were unable to recover it. 00:28:18.121 [2024-11-19 10:55:25.542856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.121 [2024-11-19 10:55:25.542887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.121 qpair failed and we were unable to recover it. 00:28:18.121 [2024-11-19 10:55:25.543143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.121 [2024-11-19 10:55:25.543177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.121 qpair failed and we were unable to recover it. 00:28:18.121 [2024-11-19 10:55:25.543471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.121 [2024-11-19 10:55:25.543502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.121 qpair failed and we were unable to recover it. 00:28:18.397 [2024-11-19 10:55:25.543794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.397 [2024-11-19 10:55:25.543828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.398 qpair failed and we were unable to recover it. 00:28:18.398 [2024-11-19 10:55:25.543990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.398 [2024-11-19 10:55:25.544024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.398 qpair failed and we were unable to recover it. 00:28:18.398 [2024-11-19 10:55:25.544295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.398 [2024-11-19 10:55:25.544327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.398 qpair failed and we were unable to recover it. 
00:28:18.398 [2024-11-19 10:55:25.544531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.398 [2024-11-19 10:55:25.544562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.398 qpair failed and we were unable to recover it. 00:28:18.398 [2024-11-19 10:55:25.544816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.398 [2024-11-19 10:55:25.544849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.398 qpair failed and we were unable to recover it. 00:28:18.398 [2024-11-19 10:55:25.545119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.398 [2024-11-19 10:55:25.545152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.398 qpair failed and we were unable to recover it. 00:28:18.398 [2024-11-19 10:55:25.545296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.398 [2024-11-19 10:55:25.545328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.398 qpair failed and we were unable to recover it. 00:28:18.398 [2024-11-19 10:55:25.545525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.398 [2024-11-19 10:55:25.545556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.398 qpair failed and we were unable to recover it. 00:28:18.398 [2024-11-19 10:55:25.545784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.398 [2024-11-19 10:55:25.545820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.398 qpair failed and we were unable to recover it. 00:28:18.398 [2024-11-19 10:55:25.546103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.398 [2024-11-19 10:55:25.546137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.398 qpair failed and we were unable to recover it. 00:28:18.398 [2024-11-19 10:55:25.546364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.398 [2024-11-19 10:55:25.546396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.398 qpair failed and we were unable to recover it. 00:28:18.398 [2024-11-19 10:55:25.546671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.398 [2024-11-19 10:55:25.546703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.398 qpair failed and we were unable to recover it. 00:28:18.398 [2024-11-19 10:55:25.547022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.398 [2024-11-19 10:55:25.547056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.398 qpair failed and we were unable to recover it. 
00:28:18.398 [2024-11-19 10:55:25.547179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.398 [2024-11-19 10:55:25.547212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.398 qpair failed and we were unable to recover it. 00:28:18.398 [2024-11-19 10:55:25.547420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.398 [2024-11-19 10:55:25.547452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.398 qpair failed and we were unable to recover it. 00:28:18.398 [2024-11-19 10:55:25.547690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.398 [2024-11-19 10:55:25.547722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.398 qpair failed and we were unable to recover it. 00:28:18.398 [2024-11-19 10:55:25.547995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.398 [2024-11-19 10:55:25.548029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.398 qpair failed and we were unable to recover it. 00:28:18.398 [2024-11-19 10:55:25.548183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.398 [2024-11-19 10:55:25.548215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.398 qpair failed and we were unable to recover it. 00:28:18.398 [2024-11-19 10:55:25.548351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.398 [2024-11-19 10:55:25.548386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.398 qpair failed and we were unable to recover it. 00:28:18.398 [2024-11-19 10:55:25.548580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.398 [2024-11-19 10:55:25.548612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.398 qpair failed and we were unable to recover it. 00:28:18.398 [2024-11-19 10:55:25.548882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.398 [2024-11-19 10:55:25.548914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.398 qpair failed and we were unable to recover it. 00:28:18.398 [2024-11-19 10:55:25.549121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.398 [2024-11-19 10:55:25.549155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.398 qpair failed and we were unable to recover it. 00:28:18.398 [2024-11-19 10:55:25.549352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.398 [2024-11-19 10:55:25.549384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.398 qpair failed and we were unable to recover it. 
00:28:18.398 [2024-11-19 10:55:25.549590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.398 [2024-11-19 10:55:25.549622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.398 qpair failed and we were unable to recover it. 00:28:18.398 [2024-11-19 10:55:25.549800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.398 [2024-11-19 10:55:25.549834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.398 qpair failed and we were unable to recover it. 00:28:18.398 [2024-11-19 10:55:25.550031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.398 [2024-11-19 10:55:25.550066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.398 qpair failed and we were unable to recover it. 00:28:18.398 [2024-11-19 10:55:25.550259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.398 [2024-11-19 10:55:25.550290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.398 qpair failed and we were unable to recover it. 00:28:18.398 [2024-11-19 10:55:25.550590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.398 [2024-11-19 10:55:25.550629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.398 qpair failed and we were unable to recover it. 00:28:18.398 [2024-11-19 10:55:25.550810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.398 [2024-11-19 10:55:25.550843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.398 qpair failed and we were unable to recover it. 00:28:18.398 [2024-11-19 10:55:25.551145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.398 [2024-11-19 10:55:25.551180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.398 qpair failed and we were unable to recover it. 00:28:18.398 [2024-11-19 10:55:25.551359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.398 [2024-11-19 10:55:25.551391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.398 qpair failed and we were unable to recover it. 00:28:18.398 [2024-11-19 10:55:25.551530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.398 [2024-11-19 10:55:25.551562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.398 qpair failed and we were unable to recover it. 00:28:18.398 [2024-11-19 10:55:25.551765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.398 [2024-11-19 10:55:25.551796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.398 qpair failed and we were unable to recover it. 
00:28:18.398 [2024-11-19 10:55:25.552078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.398 [2024-11-19 10:55:25.552111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.398 qpair failed and we were unable to recover it. 00:28:18.398 [2024-11-19 10:55:25.552264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.398 [2024-11-19 10:55:25.552296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.398 qpair failed and we were unable to recover it. 00:28:18.398 [2024-11-19 10:55:25.552514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.398 [2024-11-19 10:55:25.552546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.398 qpair failed and we were unable to recover it. 00:28:18.398 [2024-11-19 10:55:25.552762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.398 [2024-11-19 10:55:25.552793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.398 qpair failed and we were unable to recover it. 00:28:18.398 [2024-11-19 10:55:25.553093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.399 [2024-11-19 10:55:25.553129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.399 qpair failed and we were unable to recover it. 00:28:18.399 [2024-11-19 10:55:25.553333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.399 [2024-11-19 10:55:25.553363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.399 qpair failed and we were unable to recover it. 00:28:18.399 [2024-11-19 10:55:25.553562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.399 [2024-11-19 10:55:25.553594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.399 qpair failed and we were unable to recover it. 00:28:18.399 [2024-11-19 10:55:25.553804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.399 [2024-11-19 10:55:25.553836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.399 qpair failed and we were unable to recover it. 00:28:18.399 [2024-11-19 10:55:25.554030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.399 [2024-11-19 10:55:25.554062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.399 qpair failed and we were unable to recover it. 00:28:18.399 [2024-11-19 10:55:25.554291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.399 [2024-11-19 10:55:25.554322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.399 qpair failed and we were unable to recover it. 
00:28:18.399 [2024-11-19 10:55:25.554593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.399 [2024-11-19 10:55:25.554625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.399 qpair failed and we were unable to recover it. 00:28:18.399 [2024-11-19 10:55:25.554924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.399 [2024-11-19 10:55:25.554963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.399 qpair failed and we were unable to recover it. 00:28:18.399 [2024-11-19 10:55:25.555122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.399 [2024-11-19 10:55:25.555154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.399 qpair failed and we were unable to recover it. 00:28:18.399 [2024-11-19 10:55:25.555359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.399 [2024-11-19 10:55:25.555390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.399 qpair failed and we were unable to recover it. 00:28:18.399 [2024-11-19 10:55:25.555614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.399 [2024-11-19 10:55:25.555646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.399 qpair failed and we were unable to recover it. 00:28:18.399 [2024-11-19 10:55:25.555765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.399 [2024-11-19 10:55:25.555796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.399 qpair failed and we were unable to recover it. 00:28:18.399 [2024-11-19 10:55:25.556079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.399 [2024-11-19 10:55:25.556113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.399 qpair failed and we were unable to recover it. 00:28:18.399 [2024-11-19 10:55:25.556264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.399 [2024-11-19 10:55:25.556296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.399 qpair failed and we were unable to recover it. 00:28:18.399 [2024-11-19 10:55:25.556494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.399 [2024-11-19 10:55:25.556525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.399 qpair failed and we were unable to recover it. 00:28:18.399 [2024-11-19 10:55:25.556818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.399 [2024-11-19 10:55:25.556849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.399 qpair failed and we were unable to recover it. 
00:28:18.399 [2024-11-19 10:55:25.557088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.399 [2024-11-19 10:55:25.557123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.399 qpair failed and we were unable to recover it. 00:28:18.399 [2024-11-19 10:55:25.557395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.399 [2024-11-19 10:55:25.557428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.399 qpair failed and we were unable to recover it. 00:28:18.399 [2024-11-19 10:55:25.557606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.399 [2024-11-19 10:55:25.557638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.399 qpair failed and we were unable to recover it. 00:28:18.399 [2024-11-19 10:55:25.557911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.399 [2024-11-19 10:55:25.557941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.399 qpair failed and we were unable to recover it. 00:28:18.399 [2024-11-19 10:55:25.558150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.399 [2024-11-19 10:55:25.558183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.399 qpair failed and we were unable to recover it. 00:28:18.399 [2024-11-19 10:55:25.558387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.399 [2024-11-19 10:55:25.558418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.399 qpair failed and we were unable to recover it. 00:28:18.399 [2024-11-19 10:55:25.558617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.399 [2024-11-19 10:55:25.558649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.399 qpair failed and we were unable to recover it. 00:28:18.399 [2024-11-19 10:55:25.558920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.399 [2024-11-19 10:55:25.558959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.399 qpair failed and we were unable to recover it. 00:28:18.399 [2024-11-19 10:55:25.559250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.399 [2024-11-19 10:55:25.559283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.399 qpair failed and we were unable to recover it. 00:28:18.399 [2024-11-19 10:55:25.559577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.399 [2024-11-19 10:55:25.559608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.399 qpair failed and we were unable to recover it. 
00:28:18.399 [2024-11-19 10:55:25.559884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.399 [2024-11-19 10:55:25.559916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.399 qpair failed and we were unable to recover it. 00:28:18.399 [2024-11-19 10:55:25.560205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.399 [2024-11-19 10:55:25.560238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.399 qpair failed and we were unable to recover it. 00:28:18.399 [2024-11-19 10:55:25.560496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.399 [2024-11-19 10:55:25.560528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.399 qpair failed and we were unable to recover it. 00:28:18.399 [2024-11-19 10:55:25.560727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.399 [2024-11-19 10:55:25.560758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.399 qpair failed and we were unable to recover it. 00:28:18.399 [2024-11-19 10:55:25.561034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.399 [2024-11-19 10:55:25.561068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.399 qpair failed and we were unable to recover it. 00:28:18.399 [2024-11-19 10:55:25.561286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.399 [2024-11-19 10:55:25.561319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.399 qpair failed and we were unable to recover it. 00:28:18.399 [2024-11-19 10:55:25.561545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.399 [2024-11-19 10:55:25.561577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.399 qpair failed and we were unable to recover it. 00:28:18.399 [2024-11-19 10:55:25.561756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.399 [2024-11-19 10:55:25.561787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.399 qpair failed and we were unable to recover it. 00:28:18.399 [2024-11-19 10:55:25.561982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.399 [2024-11-19 10:55:25.562016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.399 qpair failed and we were unable to recover it. 00:28:18.399 [2024-11-19 10:55:25.562216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.399 [2024-11-19 10:55:25.562247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.399 qpair failed and we were unable to recover it. 
00:28:18.399 [2024-11-19 10:55:25.562444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.400 [2024-11-19 10:55:25.562475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.400 qpair failed and we were unable to recover it. 00:28:18.400 [2024-11-19 10:55:25.562731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.400 [2024-11-19 10:55:25.562762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.400 qpair failed and we were unable to recover it. 00:28:18.400 [2024-11-19 10:55:25.563041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.400 [2024-11-19 10:55:25.563074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.400 qpair failed and we were unable to recover it. 00:28:18.400 [2024-11-19 10:55:25.563365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.400 [2024-11-19 10:55:25.563398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.400 qpair failed and we were unable to recover it. 00:28:18.400 [2024-11-19 10:55:25.563605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.400 [2024-11-19 10:55:25.563636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.400 qpair failed and we were unable to recover it. 00:28:18.400 [2024-11-19 10:55:25.563930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.400 [2024-11-19 10:55:25.563970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.400 qpair failed and we were unable to recover it. 00:28:18.400 [2024-11-19 10:55:25.564130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.400 [2024-11-19 10:55:25.564162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.400 qpair failed and we were unable to recover it. 00:28:18.400 [2024-11-19 10:55:25.564442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.400 [2024-11-19 10:55:25.564474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.400 qpair failed and we were unable to recover it. 00:28:18.400 [2024-11-19 10:55:25.564679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.400 [2024-11-19 10:55:25.564712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.400 qpair failed and we were unable to recover it. 00:28:18.400 [2024-11-19 10:55:25.564944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.400 [2024-11-19 10:55:25.564988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.400 qpair failed and we were unable to recover it. 
00:28:18.400 [2024-11-19 10:55:25.565241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.400 [2024-11-19 10:55:25.565273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.400 qpair failed and we were unable to recover it. 00:28:18.400 [2024-11-19 10:55:25.565572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.400 [2024-11-19 10:55:25.565604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.400 qpair failed and we were unable to recover it. 00:28:18.400 [2024-11-19 10:55:25.565919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.400 [2024-11-19 10:55:25.565960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.400 qpair failed and we were unable to recover it. 00:28:18.400 [2024-11-19 10:55:25.566164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.400 [2024-11-19 10:55:25.566195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.400 qpair failed and we were unable to recover it. 00:28:18.400 [2024-11-19 10:55:25.566351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.400 [2024-11-19 10:55:25.566383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.400 qpair failed and we were unable to recover it. 00:28:18.400 [2024-11-19 10:55:25.566587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.400 [2024-11-19 10:55:25.566619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.400 qpair failed and we were unable to recover it. 00:28:18.400 [2024-11-19 10:55:25.566843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.400 [2024-11-19 10:55:25.566876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.400 qpair failed and we were unable to recover it. 00:28:18.400 [2024-11-19 10:55:25.567181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.400 [2024-11-19 10:55:25.567215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.400 qpair failed and we were unable to recover it. 00:28:18.400 [2024-11-19 10:55:25.567486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.400 [2024-11-19 10:55:25.567517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.400 qpair failed and we were unable to recover it. 00:28:18.400 [2024-11-19 10:55:25.567736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.400 [2024-11-19 10:55:25.567768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.400 qpair failed and we were unable to recover it. 
00:28:18.400 [2024-11-19 10:55:25.567993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.400 [2024-11-19 10:55:25.568026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.400 qpair failed and we were unable to recover it. 00:28:18.400 [2024-11-19 10:55:25.568281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.400 [2024-11-19 10:55:25.568319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.400 qpair failed and we were unable to recover it. 00:28:18.400 [2024-11-19 10:55:25.568515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.400 [2024-11-19 10:55:25.568546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.400 qpair failed and we were unable to recover it. 00:28:18.400 [2024-11-19 10:55:25.568819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.400 [2024-11-19 10:55:25.568851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.400 qpair failed and we were unable to recover it. 00:28:18.400 [2024-11-19 10:55:25.569051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.400 [2024-11-19 10:55:25.569085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.400 qpair failed and we were unable to recover it. 00:28:18.400 [2024-11-19 10:55:25.569326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.400 [2024-11-19 10:55:25.569357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.400 qpair failed and we were unable to recover it. 00:28:18.400 [2024-11-19 10:55:25.569576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.400 [2024-11-19 10:55:25.569609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.400 qpair failed and we were unable to recover it. 00:28:18.400 [2024-11-19 10:55:25.569879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.400 [2024-11-19 10:55:25.569911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.400 qpair failed and we were unable to recover it. 00:28:18.400 [2024-11-19 10:55:25.570173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.400 [2024-11-19 10:55:25.570207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.400 qpair failed and we were unable to recover it. 00:28:18.400 [2024-11-19 10:55:25.570353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.400 [2024-11-19 10:55:25.570384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.400 qpair failed and we were unable to recover it. 
00:28:18.400 [2024-11-19 10:55:25.570651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.400 [2024-11-19 10:55:25.570684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.400 qpair failed and we were unable to recover it. 00:28:18.400 [2024-11-19 10:55:25.570978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.400 [2024-11-19 10:55:25.571011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.400 qpair failed and we were unable to recover it. 00:28:18.400 [2024-11-19 10:55:25.571236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.400 [2024-11-19 10:55:25.571268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.400 qpair failed and we were unable to recover it. 00:28:18.400 [2024-11-19 10:55:25.571466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.400 [2024-11-19 10:55:25.571499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.400 qpair failed and we were unable to recover it. 00:28:18.400 [2024-11-19 10:55:25.571770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.400 [2024-11-19 10:55:25.571802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.400 qpair failed and we were unable to recover it. 00:28:18.400 [2024-11-19 10:55:25.572084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.400 [2024-11-19 10:55:25.572118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.400 qpair failed and we were unable to recover it. 00:28:18.400 [2024-11-19 10:55:25.572265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.400 [2024-11-19 10:55:25.572297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.400 qpair failed and we were unable to recover it. 00:28:18.400 [2024-11-19 10:55:25.572502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.400 [2024-11-19 10:55:25.572533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.400 qpair failed and we were unable to recover it. 00:28:18.401 [2024-11-19 10:55:25.572820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.401 [2024-11-19 10:55:25.572854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.401 qpair failed and we were unable to recover it. 00:28:18.401 [2024-11-19 10:55:25.573117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.401 [2024-11-19 10:55:25.573150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.401 qpair failed and we were unable to recover it. 
00:28:18.401 [2024-11-19 10:55:25.573358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.401 [2024-11-19 10:55:25.573390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.401 qpair failed and we were unable to recover it. 00:28:18.401 [2024-11-19 10:55:25.573514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.401 [2024-11-19 10:55:25.573546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.401 qpair failed and we were unable to recover it. 00:28:18.401 [2024-11-19 10:55:25.573807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.401 [2024-11-19 10:55:25.573838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.401 qpair failed and we were unable to recover it. 00:28:18.401 [2024-11-19 10:55:25.574035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.401 [2024-11-19 10:55:25.574067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.401 qpair failed and we were unable to recover it. 00:28:18.401 [2024-11-19 10:55:25.574284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.401 [2024-11-19 10:55:25.574316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.401 qpair failed and we were unable to recover it. 00:28:18.401 [2024-11-19 10:55:25.574599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.401 [2024-11-19 10:55:25.574632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.401 qpair failed and we were unable to recover it. 00:28:18.401 [2024-11-19 10:55:25.574910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.401 [2024-11-19 10:55:25.574941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.401 qpair failed and we were unable to recover it. 00:28:18.401 [2024-11-19 10:55:25.575160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.401 [2024-11-19 10:55:25.575193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.401 qpair failed and we were unable to recover it. 00:28:18.401 [2024-11-19 10:55:25.575420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.401 [2024-11-19 10:55:25.575453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.401 qpair failed and we were unable to recover it. 00:28:18.401 [2024-11-19 10:55:25.575644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.401 [2024-11-19 10:55:25.575676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.401 qpair failed and we were unable to recover it. 
00:28:18.401 [2024-11-19 10:55:25.575821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.401 [2024-11-19 10:55:25.575853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.401 qpair failed and we were unable to recover it. 00:28:18.401 [2024-11-19 10:55:25.576080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.401 [2024-11-19 10:55:25.576113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.401 qpair failed and we were unable to recover it. 00:28:18.401 [2024-11-19 10:55:25.576304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.401 [2024-11-19 10:55:25.576337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.401 qpair failed and we were unable to recover it. 00:28:18.401 [2024-11-19 10:55:25.576539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.401 [2024-11-19 10:55:25.576570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.401 qpair failed and we were unable to recover it. 00:28:18.401 [2024-11-19 10:55:25.576800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.401 [2024-11-19 10:55:25.576831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.401 qpair failed and we were unable to recover it. 00:28:18.401 [2024-11-19 10:55:25.577108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.401 [2024-11-19 10:55:25.577141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.401 qpair failed and we were unable to recover it. 00:28:18.401 [2024-11-19 10:55:25.577431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.401 [2024-11-19 10:55:25.577463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.401 qpair failed and we were unable to recover it. 00:28:18.401 [2024-11-19 10:55:25.577736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.401 [2024-11-19 10:55:25.577768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.401 qpair failed and we were unable to recover it. 00:28:18.401 [2024-11-19 10:55:25.578089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.401 [2024-11-19 10:55:25.578122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.401 qpair failed and we were unable to recover it. 00:28:18.401 [2024-11-19 10:55:25.578314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.401 [2024-11-19 10:55:25.578347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.401 qpair failed and we were unable to recover it. 
00:28:18.407 [2024-11-19 10:55:25.629337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.407 [2024-11-19 10:55:25.629370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.407 qpair failed and we were unable to recover it. 00:28:18.407 [2024-11-19 10:55:25.629522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.407 [2024-11-19 10:55:25.629561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.407 qpair failed and we were unable to recover it. 00:28:18.407 [2024-11-19 10:55:25.629835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.407 [2024-11-19 10:55:25.629866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.407 qpair failed and we were unable to recover it. 00:28:18.407 [2024-11-19 10:55:25.630126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.407 [2024-11-19 10:55:25.630161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.407 qpair failed and we were unable to recover it. 00:28:18.407 [2024-11-19 10:55:25.630417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.407 [2024-11-19 10:55:25.630450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.407 qpair failed and we were unable to recover it. 00:28:18.407 [2024-11-19 10:55:25.630707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.407 [2024-11-19 10:55:25.630740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.407 qpair failed and we were unable to recover it. 00:28:18.407 [2024-11-19 10:55:25.631042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.407 [2024-11-19 10:55:25.631076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.407 qpair failed and we were unable to recover it. 00:28:18.407 [2024-11-19 10:55:25.631295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.407 [2024-11-19 10:55:25.631328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.407 qpair failed and we were unable to recover it. 00:28:18.407 [2024-11-19 10:55:25.631602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.407 [2024-11-19 10:55:25.631634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.407 qpair failed and we were unable to recover it. 00:28:18.407 [2024-11-19 10:55:25.631887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.407 [2024-11-19 10:55:25.631920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.407 qpair failed and we were unable to recover it. 
00:28:18.407 [2024-11-19 10:55:25.632139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.407 [2024-11-19 10:55:25.632172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.407 qpair failed and we were unable to recover it. 00:28:18.407 [2024-11-19 10:55:25.632307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.407 [2024-11-19 10:55:25.632339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.407 qpair failed and we were unable to recover it. 00:28:18.407 [2024-11-19 10:55:25.632487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.407 [2024-11-19 10:55:25.632519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.407 qpair failed and we were unable to recover it. 00:28:18.407 [2024-11-19 10:55:25.632794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.407 [2024-11-19 10:55:25.632827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.407 qpair failed and we were unable to recover it. 00:28:18.407 [2024-11-19 10:55:25.633050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.407 [2024-11-19 10:55:25.633084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.407 qpair failed and we were unable to recover it. 00:28:18.407 [2024-11-19 10:55:25.633298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.407 [2024-11-19 10:55:25.633331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.407 qpair failed and we were unable to recover it. 00:28:18.407 [2024-11-19 10:55:25.633482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.407 [2024-11-19 10:55:25.633514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.407 qpair failed and we were unable to recover it. 00:28:18.407 [2024-11-19 10:55:25.633730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.407 [2024-11-19 10:55:25.633762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.407 qpair failed and we were unable to recover it. 00:28:18.407 [2024-11-19 10:55:25.633944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.407 [2024-11-19 10:55:25.634003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.407 qpair failed and we were unable to recover it. 00:28:18.407 [2024-11-19 10:55:25.634187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.407 [2024-11-19 10:55:25.634219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.407 qpair failed and we were unable to recover it. 
00:28:18.407 [2024-11-19 10:55:25.634409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.407 [2024-11-19 10:55:25.634442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.407 qpair failed and we were unable to recover it. 00:28:18.407 [2024-11-19 10:55:25.634665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.407 [2024-11-19 10:55:25.634695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.407 qpair failed and we were unable to recover it. 00:28:18.407 [2024-11-19 10:55:25.634973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.407 [2024-11-19 10:55:25.635006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.407 qpair failed and we were unable to recover it. 00:28:18.407 [2024-11-19 10:55:25.635162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.407 [2024-11-19 10:55:25.635193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.407 qpair failed and we were unable to recover it. 00:28:18.407 [2024-11-19 10:55:25.635357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.407 [2024-11-19 10:55:25.635389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.407 qpair failed and we were unable to recover it. 00:28:18.407 [2024-11-19 10:55:25.635552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.407 [2024-11-19 10:55:25.635583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.407 qpair failed and we were unable to recover it. 00:28:18.407 [2024-11-19 10:55:25.635720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.407 [2024-11-19 10:55:25.635752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.407 qpair failed and we were unable to recover it. 00:28:18.407 [2024-11-19 10:55:25.636029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.407 [2024-11-19 10:55:25.636062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.407 qpair failed and we were unable to recover it. 00:28:18.407 [2024-11-19 10:55:25.636260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.407 [2024-11-19 10:55:25.636293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.407 qpair failed and we were unable to recover it. 00:28:18.407 [2024-11-19 10:55:25.636447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.408 [2024-11-19 10:55:25.636477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.408 qpair failed and we were unable to recover it. 
00:28:18.408 [2024-11-19 10:55:25.636762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.408 [2024-11-19 10:55:25.636794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.408 qpair failed and we were unable to recover it. 00:28:18.408 [2024-11-19 10:55:25.637123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.408 [2024-11-19 10:55:25.637157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.408 qpair failed and we were unable to recover it. 00:28:18.408 [2024-11-19 10:55:25.637441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.408 [2024-11-19 10:55:25.637474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.408 qpair failed and we were unable to recover it. 00:28:18.408 [2024-11-19 10:55:25.637702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.408 [2024-11-19 10:55:25.637734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.408 qpair failed and we were unable to recover it. 00:28:18.408 [2024-11-19 10:55:25.638018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.408 [2024-11-19 10:55:25.638053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.408 qpair failed and we were unable to recover it. 00:28:18.408 [2024-11-19 10:55:25.638335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.408 [2024-11-19 10:55:25.638366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.408 qpair failed and we were unable to recover it. 00:28:18.408 [2024-11-19 10:55:25.638652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.408 [2024-11-19 10:55:25.638684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.408 qpair failed and we were unable to recover it. 00:28:18.408 [2024-11-19 10:55:25.638867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.408 [2024-11-19 10:55:25.638899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.408 qpair failed and we were unable to recover it. 00:28:18.408 [2024-11-19 10:55:25.639071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.408 [2024-11-19 10:55:25.639104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.408 qpair failed and we were unable to recover it. 00:28:18.408 [2024-11-19 10:55:25.639403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.408 [2024-11-19 10:55:25.639436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.408 qpair failed and we were unable to recover it. 
00:28:18.408 [2024-11-19 10:55:25.639593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.408 [2024-11-19 10:55:25.639626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.408 qpair failed and we were unable to recover it. 00:28:18.408 [2024-11-19 10:55:25.639820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.408 [2024-11-19 10:55:25.639859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.408 qpair failed and we were unable to recover it. 00:28:18.408 [2024-11-19 10:55:25.640089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.408 [2024-11-19 10:55:25.640123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.408 qpair failed and we were unable to recover it. 00:28:18.408 [2024-11-19 10:55:25.640344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.408 [2024-11-19 10:55:25.640377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.408 qpair failed and we were unable to recover it. 00:28:18.408 [2024-11-19 10:55:25.640504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.408 [2024-11-19 10:55:25.640535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.408 qpair failed and we were unable to recover it. 00:28:18.408 [2024-11-19 10:55:25.640867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.408 [2024-11-19 10:55:25.640899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.408 qpair failed and we were unable to recover it. 00:28:18.408 [2024-11-19 10:55:25.641109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.408 [2024-11-19 10:55:25.641142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.408 qpair failed and we were unable to recover it. 00:28:18.408 [2024-11-19 10:55:25.641325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.408 [2024-11-19 10:55:25.641357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.408 qpair failed and we were unable to recover it. 00:28:18.408 [2024-11-19 10:55:25.641566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.408 [2024-11-19 10:55:25.641597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.408 qpair failed and we were unable to recover it. 00:28:18.408 [2024-11-19 10:55:25.641875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.408 [2024-11-19 10:55:25.641908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.408 qpair failed and we were unable to recover it. 
00:28:18.408 [2024-11-19 10:55:25.642114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.408 [2024-11-19 10:55:25.642147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.408 qpair failed and we were unable to recover it. 00:28:18.408 [2024-11-19 10:55:25.642386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.408 [2024-11-19 10:55:25.642418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.408 qpair failed and we were unable to recover it. 00:28:18.408 [2024-11-19 10:55:25.642653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.408 [2024-11-19 10:55:25.642685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.408 qpair failed and we were unable to recover it. 00:28:18.408 [2024-11-19 10:55:25.642958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.408 [2024-11-19 10:55:25.642991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.408 qpair failed and we were unable to recover it. 00:28:18.408 [2024-11-19 10:55:25.643207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.408 [2024-11-19 10:55:25.643239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.408 qpair failed and we were unable to recover it. 00:28:18.408 [2024-11-19 10:55:25.643588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.408 [2024-11-19 10:55:25.643620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.408 qpair failed and we were unable to recover it. 00:28:18.408 [2024-11-19 10:55:25.643901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.408 [2024-11-19 10:55:25.643933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.408 qpair failed and we were unable to recover it. 00:28:18.408 [2024-11-19 10:55:25.644098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.408 [2024-11-19 10:55:25.644130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.408 qpair failed and we were unable to recover it. 00:28:18.408 [2024-11-19 10:55:25.644330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.408 [2024-11-19 10:55:25.644362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.408 qpair failed and we were unable to recover it. 00:28:18.408 [2024-11-19 10:55:25.644615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.408 [2024-11-19 10:55:25.644648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.408 qpair failed and we were unable to recover it. 
00:28:18.408 [2024-11-19 10:55:25.644830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.408 [2024-11-19 10:55:25.644862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.408 qpair failed and we were unable to recover it. 00:28:18.408 [2024-11-19 10:55:25.645073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.408 [2024-11-19 10:55:25.645106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.408 qpair failed and we were unable to recover it. 00:28:18.408 [2024-11-19 10:55:25.645306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.408 [2024-11-19 10:55:25.645337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.408 qpair failed and we were unable to recover it. 00:28:18.408 [2024-11-19 10:55:25.645538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.408 [2024-11-19 10:55:25.645569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.408 qpair failed and we were unable to recover it. 00:28:18.408 [2024-11-19 10:55:25.645777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.408 [2024-11-19 10:55:25.645813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.408 qpair failed and we were unable to recover it. 00:28:18.408 [2024-11-19 10:55:25.646004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.408 [2024-11-19 10:55:25.646039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.408 qpair failed and we were unable to recover it. 00:28:18.408 [2024-11-19 10:55:25.646231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.408 [2024-11-19 10:55:25.646265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.408 qpair failed and we were unable to recover it. 00:28:18.409 [2024-11-19 10:55:25.646424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.409 [2024-11-19 10:55:25.646457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.409 qpair failed and we were unable to recover it. 00:28:18.409 [2024-11-19 10:55:25.646779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.409 [2024-11-19 10:55:25.646811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.409 qpair failed and we were unable to recover it. 00:28:18.409 [2024-11-19 10:55:25.647055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.409 [2024-11-19 10:55:25.647088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.409 qpair failed and we were unable to recover it. 
00:28:18.409 [2024-11-19 10:55:25.647231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.409 [2024-11-19 10:55:25.647265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.409 qpair failed and we were unable to recover it. 00:28:18.409 [2024-11-19 10:55:25.647482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.409 [2024-11-19 10:55:25.647514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.409 qpair failed and we were unable to recover it. 00:28:18.409 [2024-11-19 10:55:25.647816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.409 [2024-11-19 10:55:25.647850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.409 qpair failed and we were unable to recover it. 00:28:18.409 [2024-11-19 10:55:25.648131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.409 [2024-11-19 10:55:25.648164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.409 qpair failed and we were unable to recover it. 00:28:18.409 [2024-11-19 10:55:25.648368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.409 [2024-11-19 10:55:25.648401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.409 qpair failed and we were unable to recover it. 00:28:18.409 [2024-11-19 10:55:25.648545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.409 [2024-11-19 10:55:25.648577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.409 qpair failed and we were unable to recover it. 00:28:18.409 [2024-11-19 10:55:25.648850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.409 [2024-11-19 10:55:25.648883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.409 qpair failed and we were unable to recover it. 00:28:18.409 [2024-11-19 10:55:25.649100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.409 [2024-11-19 10:55:25.649133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.409 qpair failed and we were unable to recover it. 00:28:18.409 [2024-11-19 10:55:25.649395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.409 [2024-11-19 10:55:25.649428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.409 qpair failed and we were unable to recover it. 00:28:18.409 [2024-11-19 10:55:25.649693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.409 [2024-11-19 10:55:25.649725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.409 qpair failed and we were unable to recover it. 
00:28:18.409 [2024-11-19 10:55:25.649927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.409 [2024-11-19 10:55:25.649969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.409 qpair failed and we were unable to recover it. 00:28:18.409 [2024-11-19 10:55:25.650173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.409 [2024-11-19 10:55:25.650212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.409 qpair failed and we were unable to recover it. 00:28:18.409 [2024-11-19 10:55:25.650406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.409 [2024-11-19 10:55:25.650439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.409 qpair failed and we were unable to recover it. 00:28:18.409 [2024-11-19 10:55:25.650761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.409 [2024-11-19 10:55:25.650792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.409 qpair failed and we were unable to recover it. 00:28:18.409 [2024-11-19 10:55:25.650940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.409 [2024-11-19 10:55:25.650981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.409 qpair failed and we were unable to recover it. 00:28:18.409 [2024-11-19 10:55:25.651186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.409 [2024-11-19 10:55:25.651219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.409 qpair failed and we were unable to recover it. 00:28:18.409 [2024-11-19 10:55:25.651501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.409 [2024-11-19 10:55:25.651533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.409 qpair failed and we were unable to recover it. 00:28:18.409 [2024-11-19 10:55:25.651809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.409 [2024-11-19 10:55:25.651840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.409 qpair failed and we were unable to recover it. 00:28:18.409 [2024-11-19 10:55:25.652077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.409 [2024-11-19 10:55:25.652112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.409 qpair failed and we were unable to recover it. 00:28:18.409 [2024-11-19 10:55:25.652310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.409 [2024-11-19 10:55:25.652342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.409 qpair failed and we were unable to recover it. 
00:28:18.409 [2024-11-19 10:55:25.652550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.409 [2024-11-19 10:55:25.652582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.409 qpair failed and we were unable to recover it. 00:28:18.409 [2024-11-19 10:55:25.652855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.409 [2024-11-19 10:55:25.652887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.409 qpair failed and we were unable to recover it. 00:28:18.409 [2024-11-19 10:55:25.653093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.409 [2024-11-19 10:55:25.653127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.409 qpair failed and we were unable to recover it. 00:28:18.409 [2024-11-19 10:55:25.653333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.409 [2024-11-19 10:55:25.653365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.409 qpair failed and we were unable to recover it. 00:28:18.409 [2024-11-19 10:55:25.653633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.409 [2024-11-19 10:55:25.653665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.409 qpair failed and we were unable to recover it. 00:28:18.409 [2024-11-19 10:55:25.653863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.409 [2024-11-19 10:55:25.653895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.409 qpair failed and we were unable to recover it. 00:28:18.409 [2024-11-19 10:55:25.654087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.409 [2024-11-19 10:55:25.654121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.409 qpair failed and we were unable to recover it. 00:28:18.409 [2024-11-19 10:55:25.654335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.409 [2024-11-19 10:55:25.654366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.409 qpair failed and we were unable to recover it. 00:28:18.409 [2024-11-19 10:55:25.654621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.409 [2024-11-19 10:55:25.654654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.409 qpair failed and we were unable to recover it. 00:28:18.409 [2024-11-19 10:55:25.654845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.409 [2024-11-19 10:55:25.654876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.409 qpair failed and we were unable to recover it. 
00:28:18.409 [2024-11-19 10:55:25.655129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.409 [2024-11-19 10:55:25.655163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.409 qpair failed and we were unable to recover it. 00:28:18.409 [2024-11-19 10:55:25.655365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.409 [2024-11-19 10:55:25.655398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.409 qpair failed and we were unable to recover it. 00:28:18.409 [2024-11-19 10:55:25.655543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.409 [2024-11-19 10:55:25.655576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.409 qpair failed and we were unable to recover it. 00:28:18.409 [2024-11-19 10:55:25.655800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.409 [2024-11-19 10:55:25.655831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.409 qpair failed and we were unable to recover it. 00:28:18.409 [2024-11-19 10:55:25.656046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.409 [2024-11-19 10:55:25.656079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.410 qpair failed and we were unable to recover it. 00:28:18.410 [2024-11-19 10:55:25.656283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.410 [2024-11-19 10:55:25.656314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.410 qpair failed and we were unable to recover it. 00:28:18.410 [2024-11-19 10:55:25.656571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.410 [2024-11-19 10:55:25.656603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.410 qpair failed and we were unable to recover it. 00:28:18.410 [2024-11-19 10:55:25.656836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.410 [2024-11-19 10:55:25.656867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.410 qpair failed and we were unable to recover it. 00:28:18.410 [2024-11-19 10:55:25.657127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.410 [2024-11-19 10:55:25.657161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.410 qpair failed and we were unable to recover it. 00:28:18.410 [2024-11-19 10:55:25.657346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.410 [2024-11-19 10:55:25.657378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.410 qpair failed and we were unable to recover it. 
00:28:18.410 [2024-11-19 10:55:25.657523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.410 [2024-11-19 10:55:25.657555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.410 qpair failed and we were unable to recover it. 00:28:18.410 [2024-11-19 10:55:25.657739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.410 [2024-11-19 10:55:25.657771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.410 qpair failed and we were unable to recover it. 00:28:18.410 [2024-11-19 10:55:25.658065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.410 [2024-11-19 10:55:25.658099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.410 qpair failed and we were unable to recover it. 00:28:18.410 [2024-11-19 10:55:25.658302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.410 [2024-11-19 10:55:25.658334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.410 qpair failed and we were unable to recover it. 00:28:18.410 [2024-11-19 10:55:25.658484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.410 [2024-11-19 10:55:25.658517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.410 qpair failed and we were unable to recover it. 00:28:18.410 [2024-11-19 10:55:25.658718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.410 [2024-11-19 10:55:25.658750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.410 qpair failed and we were unable to recover it. 00:28:18.410 [2024-11-19 10:55:25.659049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.410 [2024-11-19 10:55:25.659083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.410 qpair failed and we were unable to recover it. 00:28:18.410 [2024-11-19 10:55:25.659291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.410 [2024-11-19 10:55:25.659322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.410 qpair failed and we were unable to recover it. 00:28:18.410 [2024-11-19 10:55:25.659465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.410 [2024-11-19 10:55:25.659498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.410 qpair failed and we were unable to recover it. 00:28:18.410 [2024-11-19 10:55:25.659821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.410 [2024-11-19 10:55:25.659853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.410 qpair failed and we were unable to recover it. 
00:28:18.410 [2024-11-19 10:55:25.660121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.410 [2024-11-19 10:55:25.660155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.410 qpair failed and we were unable to recover it. 00:28:18.410 [2024-11-19 10:55:25.660307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.410 [2024-11-19 10:55:25.660345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.410 qpair failed and we were unable to recover it. 00:28:18.410 [2024-11-19 10:55:25.660638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.410 [2024-11-19 10:55:25.660670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.410 qpair failed and we were unable to recover it. 00:28:18.410 [2024-11-19 10:55:25.660920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.410 [2024-11-19 10:55:25.660976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.410 qpair failed and we were unable to recover it. 00:28:18.410 [2024-11-19 10:55:25.661136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.410 [2024-11-19 10:55:25.661169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.410 qpair failed and we were unable to recover it. 00:28:18.410 [2024-11-19 10:55:25.661362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.410 [2024-11-19 10:55:25.661394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.410 qpair failed and we were unable to recover it. 00:28:18.410 [2024-11-19 10:55:25.661591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.410 [2024-11-19 10:55:25.661624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.410 qpair failed and we were unable to recover it. 00:28:18.410 [2024-11-19 10:55:25.661748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.410 [2024-11-19 10:55:25.661780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.410 qpair failed and we were unable to recover it. 00:28:18.410 [2024-11-19 10:55:25.662078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.410 [2024-11-19 10:55:25.662112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.410 qpair failed and we were unable to recover it. 00:28:18.410 [2024-11-19 10:55:25.662373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.410 [2024-11-19 10:55:25.662404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.410 qpair failed and we were unable to recover it. 
00:28:18.410 [2024-11-19 10:55:25.662655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.410 [2024-11-19 10:55:25.662688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.410 qpair failed and we were unable to recover it. 00:28:18.410 [2024-11-19 10:55:25.662988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.410 [2024-11-19 10:55:25.663021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.410 qpair failed and we were unable to recover it. 00:28:18.410 [2024-11-19 10:55:25.663275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.410 [2024-11-19 10:55:25.663307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.410 qpair failed and we were unable to recover it. 00:28:18.410 [2024-11-19 10:55:25.663524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.410 [2024-11-19 10:55:25.663557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.410 qpair failed and we were unable to recover it. 00:28:18.410 [2024-11-19 10:55:25.663818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.410 [2024-11-19 10:55:25.663851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.410 qpair failed and we were unable to recover it. 00:28:18.410 [2024-11-19 10:55:25.664080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.410 [2024-11-19 10:55:25.664113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.410 qpair failed and we were unable to recover it. 00:28:18.410 [2024-11-19 10:55:25.664322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.410 [2024-11-19 10:55:25.664354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.410 qpair failed and we were unable to recover it. 00:28:18.410 [2024-11-19 10:55:25.664641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.410 [2024-11-19 10:55:25.664673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.410 qpair failed and we were unable to recover it. 00:28:18.410 [2024-11-19 10:55:25.664970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.410 [2024-11-19 10:55:25.665004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.410 qpair failed and we were unable to recover it. 00:28:18.410 [2024-11-19 10:55:25.665144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.410 [2024-11-19 10:55:25.665178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.410 qpair failed and we were unable to recover it. 
00:28:18.416 [2024-11-19 10:55:25.715320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.416 [2024-11-19 10:55:25.715351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.416 qpair failed and we were unable to recover it. 00:28:18.416 [2024-11-19 10:55:25.715506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.416 [2024-11-19 10:55:25.715537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.416 qpair failed and we were unable to recover it. 00:28:18.416 [2024-11-19 10:55:25.715656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.416 [2024-11-19 10:55:25.715687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.416 qpair failed and we were unable to recover it. 00:28:18.416 [2024-11-19 10:55:25.715820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.416 [2024-11-19 10:55:25.715850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.416 qpair failed and we were unable to recover it. 00:28:18.416 [2024-11-19 10:55:25.716046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.416 [2024-11-19 10:55:25.716080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.416 qpair failed and we were unable to recover it. 00:28:18.416 [2024-11-19 10:55:25.716207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.416 [2024-11-19 10:55:25.716236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.416 qpair failed and we were unable to recover it. 00:28:18.416 [2024-11-19 10:55:25.716457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.416 [2024-11-19 10:55:25.716489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.416 qpair failed and we were unable to recover it. 00:28:18.416 [2024-11-19 10:55:25.716723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.416 [2024-11-19 10:55:25.716755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.416 qpair failed and we were unable to recover it. 00:28:18.416 [2024-11-19 10:55:25.716910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.416 [2024-11-19 10:55:25.716956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.416 qpair failed and we were unable to recover it. 00:28:18.416 [2024-11-19 10:55:25.717105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.416 [2024-11-19 10:55:25.717136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.416 qpair failed and we were unable to recover it. 
00:28:18.416 [2024-11-19 10:55:25.717267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.416 [2024-11-19 10:55:25.717298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.416 qpair failed and we were unable to recover it. 00:28:18.416 [2024-11-19 10:55:25.717504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.416 [2024-11-19 10:55:25.717534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.416 qpair failed and we were unable to recover it. 00:28:18.416 [2024-11-19 10:55:25.717792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.416 [2024-11-19 10:55:25.717824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.416 qpair failed and we were unable to recover it. 00:28:18.416 [2024-11-19 10:55:25.718054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.416 [2024-11-19 10:55:25.718088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.416 qpair failed and we were unable to recover it. 00:28:18.416 [2024-11-19 10:55:25.718285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.416 [2024-11-19 10:55:25.718317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.416 qpair failed and we were unable to recover it. 00:28:18.416 [2024-11-19 10:55:25.718536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.416 [2024-11-19 10:55:25.718568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.416 qpair failed and we were unable to recover it. 00:28:18.416 [2024-11-19 10:55:25.718841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.416 [2024-11-19 10:55:25.718873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.416 qpair failed and we were unable to recover it. 00:28:18.416 [2024-11-19 10:55:25.719088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.417 [2024-11-19 10:55:25.719121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.417 qpair failed and we were unable to recover it. 00:28:18.417 [2024-11-19 10:55:25.719330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.417 [2024-11-19 10:55:25.719361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.417 qpair failed and we were unable to recover it. 00:28:18.417 [2024-11-19 10:55:25.719596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.417 [2024-11-19 10:55:25.719629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.417 qpair failed and we were unable to recover it. 
00:28:18.417 [2024-11-19 10:55:25.719930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.417 [2024-11-19 10:55:25.719992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.417 qpair failed and we were unable to recover it. 00:28:18.417 [2024-11-19 10:55:25.720267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.417 [2024-11-19 10:55:25.720300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.417 qpair failed and we were unable to recover it. 00:28:18.417 [2024-11-19 10:55:25.720512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.417 [2024-11-19 10:55:25.720543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.417 qpair failed and we were unable to recover it. 00:28:18.417 [2024-11-19 10:55:25.720738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.417 [2024-11-19 10:55:25.720769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.417 qpair failed and we were unable to recover it. 00:28:18.417 [2024-11-19 10:55:25.720962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.417 [2024-11-19 10:55:25.720997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.417 qpair failed and we were unable to recover it. 00:28:18.417 [2024-11-19 10:55:25.721246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.417 [2024-11-19 10:55:25.721278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.417 qpair failed and we were unable to recover it. 00:28:18.417 [2024-11-19 10:55:25.721511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.417 [2024-11-19 10:55:25.721544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.417 qpair failed and we were unable to recover it. 00:28:18.417 [2024-11-19 10:55:25.721806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.417 [2024-11-19 10:55:25.721838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.417 qpair failed and we were unable to recover it. 00:28:18.417 [2024-11-19 10:55:25.722128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.417 [2024-11-19 10:55:25.722162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.417 qpair failed and we were unable to recover it. 00:28:18.417 [2024-11-19 10:55:25.722361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.417 [2024-11-19 10:55:25.722392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.417 qpair failed and we were unable to recover it. 
00:28:18.417 [2024-11-19 10:55:25.722538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.417 [2024-11-19 10:55:25.722570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.417 qpair failed and we were unable to recover it. 00:28:18.417 [2024-11-19 10:55:25.722749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.417 [2024-11-19 10:55:25.722781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.417 qpair failed and we were unable to recover it. 00:28:18.417 [2024-11-19 10:55:25.722905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.417 [2024-11-19 10:55:25.722937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.417 qpair failed and we were unable to recover it. 00:28:18.417 [2024-11-19 10:55:25.723107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.417 [2024-11-19 10:55:25.723139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.417 qpair failed and we were unable to recover it. 00:28:18.417 [2024-11-19 10:55:25.723342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.417 [2024-11-19 10:55:25.723374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.417 qpair failed and we were unable to recover it. 00:28:18.417 [2024-11-19 10:55:25.723677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.417 [2024-11-19 10:55:25.723708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.417 qpair failed and we were unable to recover it. 00:28:18.417 [2024-11-19 10:55:25.723904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.417 [2024-11-19 10:55:25.723935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.417 qpair failed and we were unable to recover it. 00:28:18.417 [2024-11-19 10:55:25.724103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.417 [2024-11-19 10:55:25.724133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.417 qpair failed and we were unable to recover it. 00:28:18.417 [2024-11-19 10:55:25.724330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.417 [2024-11-19 10:55:25.724362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.417 qpair failed and we were unable to recover it. 00:28:18.417 [2024-11-19 10:55:25.724568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.417 [2024-11-19 10:55:25.724599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.417 qpair failed and we were unable to recover it. 
00:28:18.417 [2024-11-19 10:55:25.724813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.417 [2024-11-19 10:55:25.724846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.417 qpair failed and we were unable to recover it. 00:28:18.417 [2024-11-19 10:55:25.725076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.417 [2024-11-19 10:55:25.725110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.417 qpair failed and we were unable to recover it. 00:28:18.417 [2024-11-19 10:55:25.725243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.417 [2024-11-19 10:55:25.725275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.417 qpair failed and we were unable to recover it. 00:28:18.417 [2024-11-19 10:55:25.725469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.417 [2024-11-19 10:55:25.725500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.417 qpair failed and we were unable to recover it. 00:28:18.417 [2024-11-19 10:55:25.725842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.417 [2024-11-19 10:55:25.725875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.417 qpair failed and we were unable to recover it. 00:28:18.417 [2024-11-19 10:55:25.726159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.417 [2024-11-19 10:55:25.726194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.417 qpair failed and we were unable to recover it. 00:28:18.417 [2024-11-19 10:55:25.726345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.417 [2024-11-19 10:55:25.726378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.417 qpair failed and we were unable to recover it. 00:28:18.417 [2024-11-19 10:55:25.726572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.417 [2024-11-19 10:55:25.726604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.417 qpair failed and we were unable to recover it. 00:28:18.417 [2024-11-19 10:55:25.726745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.417 [2024-11-19 10:55:25.726782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.417 qpair failed and we were unable to recover it. 00:28:18.417 [2024-11-19 10:55:25.727082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.417 [2024-11-19 10:55:25.727114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.417 qpair failed and we were unable to recover it. 
00:28:18.417 [2024-11-19 10:55:25.727262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.417 [2024-11-19 10:55:25.727293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.417 qpair failed and we were unable to recover it. 00:28:18.417 [2024-11-19 10:55:25.727514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.417 [2024-11-19 10:55:25.727546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.417 qpair failed and we were unable to recover it. 00:28:18.417 [2024-11-19 10:55:25.727818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.417 [2024-11-19 10:55:25.727850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.417 qpair failed and we were unable to recover it. 00:28:18.418 [2024-11-19 10:55:25.728110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.418 [2024-11-19 10:55:25.728143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.418 qpair failed and we were unable to recover it. 00:28:18.418 [2024-11-19 10:55:25.728434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.418 [2024-11-19 10:55:25.728465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.418 qpair failed and we were unable to recover it. 00:28:18.418 [2024-11-19 10:55:25.728729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.418 [2024-11-19 10:55:25.728761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.418 qpair failed and we were unable to recover it. 00:28:18.418 [2024-11-19 10:55:25.729070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.418 [2024-11-19 10:55:25.729104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.418 qpair failed and we were unable to recover it. 00:28:18.418 [2024-11-19 10:55:25.729359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.418 [2024-11-19 10:55:25.729390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.418 qpair failed and we were unable to recover it. 00:28:18.418 [2024-11-19 10:55:25.729594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.418 [2024-11-19 10:55:25.729627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.418 qpair failed and we were unable to recover it. 00:28:18.418 [2024-11-19 10:55:25.729871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.418 [2024-11-19 10:55:25.729903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.418 qpair failed and we were unable to recover it. 
00:28:18.418 [2024-11-19 10:55:25.730213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.418 [2024-11-19 10:55:25.730246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.418 qpair failed and we were unable to recover it. 00:28:18.418 [2024-11-19 10:55:25.730377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.418 [2024-11-19 10:55:25.730410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.418 qpair failed and we were unable to recover it. 00:28:18.418 [2024-11-19 10:55:25.730726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.418 [2024-11-19 10:55:25.730758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.418 qpair failed and we were unable to recover it. 00:28:18.418 [2024-11-19 10:55:25.731015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.418 [2024-11-19 10:55:25.731049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.418 qpair failed and we were unable to recover it. 00:28:18.418 [2024-11-19 10:55:25.731201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.418 [2024-11-19 10:55:25.731233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.418 qpair failed and we were unable to recover it. 00:28:18.418 [2024-11-19 10:55:25.731438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.418 [2024-11-19 10:55:25.731470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.418 qpair failed and we were unable to recover it. 00:28:18.418 [2024-11-19 10:55:25.731703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.418 [2024-11-19 10:55:25.731736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.418 qpair failed and we were unable to recover it. 00:28:18.418 [2024-11-19 10:55:25.731915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.418 [2024-11-19 10:55:25.731946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.418 qpair failed and we were unable to recover it. 00:28:18.418 [2024-11-19 10:55:25.732232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.418 [2024-11-19 10:55:25.732265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.418 qpair failed and we were unable to recover it. 00:28:18.418 [2024-11-19 10:55:25.732403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.418 [2024-11-19 10:55:25.732433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.418 qpair failed and we were unable to recover it. 
00:28:18.418 [2024-11-19 10:55:25.732670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.418 [2024-11-19 10:55:25.732702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.418 qpair failed and we were unable to recover it. 00:28:18.418 [2024-11-19 10:55:25.732907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.418 [2024-11-19 10:55:25.732939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.418 qpair failed and we were unable to recover it. 00:28:18.418 [2024-11-19 10:55:25.733071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.418 [2024-11-19 10:55:25.733104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.418 qpair failed and we were unable to recover it. 00:28:18.418 [2024-11-19 10:55:25.733355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.418 [2024-11-19 10:55:25.733387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.418 qpair failed and we were unable to recover it. 00:28:18.418 [2024-11-19 10:55:25.733652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.418 [2024-11-19 10:55:25.733684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.418 qpair failed and we were unable to recover it. 00:28:18.418 [2024-11-19 10:55:25.733921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.418 [2024-11-19 10:55:25.734002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.418 qpair failed and we were unable to recover it. 00:28:18.418 [2024-11-19 10:55:25.734164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.418 [2024-11-19 10:55:25.734196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.418 qpair failed and we were unable to recover it. 00:28:18.418 [2024-11-19 10:55:25.734410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.418 [2024-11-19 10:55:25.734442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.418 qpair failed and we were unable to recover it. 00:28:18.418 [2024-11-19 10:55:25.734583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.418 [2024-11-19 10:55:25.734614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.418 qpair failed and we were unable to recover it. 00:28:18.418 [2024-11-19 10:55:25.734842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.418 [2024-11-19 10:55:25.734875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.418 qpair failed and we were unable to recover it. 
00:28:18.418 [2024-11-19 10:55:25.735073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.418 [2024-11-19 10:55:25.735106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.418 qpair failed and we were unable to recover it. 00:28:18.418 [2024-11-19 10:55:25.735315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.418 [2024-11-19 10:55:25.735347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.418 qpair failed and we were unable to recover it. 00:28:18.418 [2024-11-19 10:55:25.735494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.418 [2024-11-19 10:55:25.735526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.418 qpair failed and we were unable to recover it. 00:28:18.418 [2024-11-19 10:55:25.735724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.418 [2024-11-19 10:55:25.735755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.418 qpair failed and we were unable to recover it. 00:28:18.418 [2024-11-19 10:55:25.735970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.418 [2024-11-19 10:55:25.736004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.418 qpair failed and we were unable to recover it. 00:28:18.418 [2024-11-19 10:55:25.736213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.418 [2024-11-19 10:55:25.736245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.418 qpair failed and we were unable to recover it. 00:28:18.418 [2024-11-19 10:55:25.736444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.418 [2024-11-19 10:55:25.736475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.418 qpair failed and we were unable to recover it. 00:28:18.418 [2024-11-19 10:55:25.736788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.418 [2024-11-19 10:55:25.736819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.418 qpair failed and we were unable to recover it. 00:28:18.418 [2024-11-19 10:55:25.737109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.418 [2024-11-19 10:55:25.737150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.418 qpair failed and we were unable to recover it. 00:28:18.419 [2024-11-19 10:55:25.737332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.419 [2024-11-19 10:55:25.737363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.419 qpair failed and we were unable to recover it. 
00:28:18.419 [2024-11-19 10:55:25.737572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.419 [2024-11-19 10:55:25.737605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.419 qpair failed and we were unable to recover it. 00:28:18.419 [2024-11-19 10:55:25.737876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.419 [2024-11-19 10:55:25.737909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.419 qpair failed and we were unable to recover it. 00:28:18.419 [2024-11-19 10:55:25.738164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.419 [2024-11-19 10:55:25.738197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.419 qpair failed and we were unable to recover it. 00:28:18.419 [2024-11-19 10:55:25.738495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.419 [2024-11-19 10:55:25.738527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.419 qpair failed and we were unable to recover it. 00:28:18.419 [2024-11-19 10:55:25.738794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.419 [2024-11-19 10:55:25.738826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.419 qpair failed and we were unable to recover it. 00:28:18.419 [2024-11-19 10:55:25.739032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.419 [2024-11-19 10:55:25.739066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.419 qpair failed and we were unable to recover it. 00:28:18.419 [2024-11-19 10:55:25.739320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.419 [2024-11-19 10:55:25.739354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.419 qpair failed and we were unable to recover it. 00:28:18.419 [2024-11-19 10:55:25.739497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.419 [2024-11-19 10:55:25.739529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.419 qpair failed and we were unable to recover it. 00:28:18.419 [2024-11-19 10:55:25.739815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.419 [2024-11-19 10:55:25.739846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.419 qpair failed and we were unable to recover it. 00:28:18.419 [2024-11-19 10:55:25.740078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.419 [2024-11-19 10:55:25.740112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.419 qpair failed and we were unable to recover it. 
00:28:18.419 [2024-11-19 10:55:25.740235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.419 [2024-11-19 10:55:25.740267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.419 qpair failed and we were unable to recover it. 00:28:18.419 [2024-11-19 10:55:25.740397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.419 [2024-11-19 10:55:25.740427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.419 qpair failed and we were unable to recover it. 00:28:18.419 [2024-11-19 10:55:25.740627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.419 [2024-11-19 10:55:25.740657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.419 qpair failed and we were unable to recover it. 00:28:18.419 [2024-11-19 10:55:25.740879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.419 [2024-11-19 10:55:25.740912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.419 qpair failed and we were unable to recover it. 00:28:18.419 [2024-11-19 10:55:25.741139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.419 [2024-11-19 10:55:25.741171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.419 qpair failed and we were unable to recover it. 00:28:18.419 [2024-11-19 10:55:25.741448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.419 [2024-11-19 10:55:25.741480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.419 qpair failed and we were unable to recover it. 00:28:18.419 [2024-11-19 10:55:25.741676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.419 [2024-11-19 10:55:25.741707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.419 qpair failed and we were unable to recover it. 00:28:18.419 [2024-11-19 10:55:25.741900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.419 [2024-11-19 10:55:25.741931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.419 qpair failed and we were unable to recover it. 00:28:18.419 [2024-11-19 10:55:25.742165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.419 [2024-11-19 10:55:25.742198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.419 qpair failed and we were unable to recover it. 00:28:18.419 [2024-11-19 10:55:25.742398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.419 [2024-11-19 10:55:25.742432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.419 qpair failed and we were unable to recover it. 
00:28:18.419 [2024-11-19 10:55:25.742651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.419 [2024-11-19 10:55:25.742683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.419 qpair failed and we were unable to recover it. 00:28:18.419 [2024-11-19 10:55:25.742992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.419 [2024-11-19 10:55:25.743026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.419 qpair failed and we were unable to recover it. 00:28:18.419 [2024-11-19 10:55:25.743240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.419 [2024-11-19 10:55:25.743273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.419 qpair failed and we were unable to recover it. 00:28:18.419 [2024-11-19 10:55:25.743476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.419 [2024-11-19 10:55:25.743507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.419 qpair failed and we were unable to recover it. 00:28:18.419 [2024-11-19 10:55:25.743708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.419 [2024-11-19 10:55:25.743739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.419 qpair failed and we were unable to recover it. 00:28:18.419 [2024-11-19 10:55:25.744022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.419 [2024-11-19 10:55:25.744057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.419 qpair failed and we were unable to recover it. 00:28:18.419 [2024-11-19 10:55:25.744259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.419 [2024-11-19 10:55:25.744290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.419 qpair failed and we were unable to recover it. 00:28:18.419 [2024-11-19 10:55:25.744433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.419 [2024-11-19 10:55:25.744466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.419 qpair failed and we were unable to recover it. 00:28:18.419 [2024-11-19 10:55:25.744799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.419 [2024-11-19 10:55:25.744830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.419 qpair failed and we were unable to recover it. 00:28:18.419 [2024-11-19 10:55:25.745085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.419 [2024-11-19 10:55:25.745120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.419 qpair failed and we were unable to recover it. 
00:28:18.419 [2024-11-19 10:55:25.745319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.419 [2024-11-19 10:55:25.745350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.419 qpair failed and we were unable to recover it. 00:28:18.419 [2024-11-19 10:55:25.745536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.419 [2024-11-19 10:55:25.745568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.419 qpair failed and we were unable to recover it. 00:28:18.419 [2024-11-19 10:55:25.745768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.419 [2024-11-19 10:55:25.745813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.419 qpair failed and we were unable to recover it. 00:28:18.419 [2024-11-19 10:55:25.746082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.419 [2024-11-19 10:55:25.746115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.419 qpair failed and we were unable to recover it. 00:28:18.419 [2024-11-19 10:55:25.746229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.419 [2024-11-19 10:55:25.746261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.419 qpair failed and we were unable to recover it. 00:28:18.419 [2024-11-19 10:55:25.746397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.419 [2024-11-19 10:55:25.746427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.419 qpair failed and we were unable to recover it. 00:28:18.419 [2024-11-19 10:55:25.746654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.419 [2024-11-19 10:55:25.746685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.420 qpair failed and we were unable to recover it. 00:28:18.420 [2024-11-19 10:55:25.746842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.420 [2024-11-19 10:55:25.746873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.420 qpair failed and we were unable to recover it. 00:28:18.420 [2024-11-19 10:55:25.747195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.420 [2024-11-19 10:55:25.747235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.420 qpair failed and we were unable to recover it. 00:28:18.420 [2024-11-19 10:55:25.747511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.420 [2024-11-19 10:55:25.747544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:18.420 qpair failed and we were unable to recover it. 
00:28:18.420 [2024-11-19 10:55:25.747826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.420 [2024-11-19 10:55:25.747860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:18.420 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats for roughly 200 further reconnect attempts between 10:55:25.748 and 10:55:25.804, with tqpair alternating between 0x7f6de0000b90 and 0x7f6dd4000b90, always against addr=10.0.0.2, port=4420 ...]
00:28:18.425 [2024-11-19 10:55:25.804321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.425 [2024-11-19 10:55:25.804353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:18.425 qpair failed and we were unable to recover it.
00:28:18.425 [2024-11-19 10:55:25.804573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.425 [2024-11-19 10:55:25.804606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.425 qpair failed and we were unable to recover it. 00:28:18.425 [2024-11-19 10:55:25.804862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.425 [2024-11-19 10:55:25.804893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.425 qpair failed and we were unable to recover it. 00:28:18.425 [2024-11-19 10:55:25.805107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.425 [2024-11-19 10:55:25.805140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.425 qpair failed and we were unable to recover it. 00:28:18.425 [2024-11-19 10:55:25.805322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.426 [2024-11-19 10:55:25.805354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.426 qpair failed and we were unable to recover it. 00:28:18.426 [2024-11-19 10:55:25.805627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.426 [2024-11-19 10:55:25.805659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.426 qpair failed and we were unable to recover it. 00:28:18.426 [2024-11-19 10:55:25.805766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.426 [2024-11-19 10:55:25.805796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.426 qpair failed and we were unable to recover it. 00:28:18.426 [2024-11-19 10:55:25.805977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.426 [2024-11-19 10:55:25.806010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.426 qpair failed and we were unable to recover it. 00:28:18.426 [2024-11-19 10:55:25.806283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.426 [2024-11-19 10:55:25.806314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.426 qpair failed and we were unable to recover it. 00:28:18.426 [2024-11-19 10:55:25.806592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.426 [2024-11-19 10:55:25.806624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.426 qpair failed and we were unable to recover it. 00:28:18.426 [2024-11-19 10:55:25.806918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.426 [2024-11-19 10:55:25.806968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.426 qpair failed and we were unable to recover it. 
00:28:18.426 [2024-11-19 10:55:25.807265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.426 [2024-11-19 10:55:25.807297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.426 qpair failed and we were unable to recover it. 00:28:18.426 [2024-11-19 10:55:25.807555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.426 [2024-11-19 10:55:25.807587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.426 qpair failed and we were unable to recover it. 00:28:18.426 [2024-11-19 10:55:25.807841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.426 [2024-11-19 10:55:25.807872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.426 qpair failed and we were unable to recover it. 00:28:18.426 [2024-11-19 10:55:25.808126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.426 [2024-11-19 10:55:25.808160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.426 qpair failed and we were unable to recover it. 00:28:18.426 [2024-11-19 10:55:25.808456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.426 [2024-11-19 10:55:25.808489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.426 qpair failed and we were unable to recover it. 00:28:18.426 [2024-11-19 10:55:25.808703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.426 [2024-11-19 10:55:25.808735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.426 qpair failed and we were unable to recover it. 00:28:18.426 [2024-11-19 10:55:25.809007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.426 [2024-11-19 10:55:25.809041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.426 qpair failed and we were unable to recover it. 00:28:18.426 [2024-11-19 10:55:25.809190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.426 [2024-11-19 10:55:25.809222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.426 qpair failed and we were unable to recover it. 00:28:18.426 [2024-11-19 10:55:25.809336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.426 [2024-11-19 10:55:25.809368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.426 qpair failed and we were unable to recover it. 00:28:18.426 [2024-11-19 10:55:25.809639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.426 [2024-11-19 10:55:25.809670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.426 qpair failed and we were unable to recover it. 
00:28:18.426 [2024-11-19 10:55:25.809865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.426 [2024-11-19 10:55:25.809896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.426 qpair failed and we were unable to recover it. 00:28:18.426 [2024-11-19 10:55:25.810183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.426 [2024-11-19 10:55:25.810217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.426 qpair failed and we were unable to recover it. 00:28:18.426 [2024-11-19 10:55:25.810482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.426 [2024-11-19 10:55:25.810514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.426 qpair failed and we were unable to recover it. 00:28:18.426 [2024-11-19 10:55:25.810787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.426 [2024-11-19 10:55:25.810819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.426 qpair failed and we were unable to recover it. 00:28:18.426 [2024-11-19 10:55:25.811045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.426 [2024-11-19 10:55:25.811077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.426 qpair failed and we were unable to recover it. 00:28:18.426 [2024-11-19 10:55:25.811295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.426 [2024-11-19 10:55:25.811327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.426 qpair failed and we were unable to recover it. 00:28:18.426 [2024-11-19 10:55:25.811603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.426 [2024-11-19 10:55:25.811634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.426 qpair failed and we were unable to recover it. 00:28:18.426 [2024-11-19 10:55:25.811900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.426 [2024-11-19 10:55:25.811931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.426 qpair failed and we were unable to recover it. 00:28:18.426 [2024-11-19 10:55:25.812233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.426 [2024-11-19 10:55:25.812272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.426 qpair failed and we were unable to recover it. 00:28:18.426 [2024-11-19 10:55:25.812521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.426 [2024-11-19 10:55:25.812553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.426 qpair failed and we were unable to recover it. 
00:28:18.426 [2024-11-19 10:55:25.812756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.426 [2024-11-19 10:55:25.812787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.426 qpair failed and we were unable to recover it. 00:28:18.426 [2024-11-19 10:55:25.812922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.426 [2024-11-19 10:55:25.812965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.426 qpair failed and we were unable to recover it. 00:28:18.426 [2024-11-19 10:55:25.813261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.426 [2024-11-19 10:55:25.813293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.426 qpair failed and we were unable to recover it. 00:28:18.426 [2024-11-19 10:55:25.813444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.426 [2024-11-19 10:55:25.813475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.426 qpair failed and we were unable to recover it. 00:28:18.426 [2024-11-19 10:55:25.813749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.426 [2024-11-19 10:55:25.813780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.426 qpair failed and we were unable to recover it. 00:28:18.426 [2024-11-19 10:55:25.813911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.426 [2024-11-19 10:55:25.813943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.426 qpair failed and we were unable to recover it. 00:28:18.426 [2024-11-19 10:55:25.814082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.426 [2024-11-19 10:55:25.814115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.426 qpair failed and we were unable to recover it. 00:28:18.426 [2024-11-19 10:55:25.814300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.426 [2024-11-19 10:55:25.814331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.426 qpair failed and we were unable to recover it. 00:28:18.426 [2024-11-19 10:55:25.814558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.426 [2024-11-19 10:55:25.814590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.426 qpair failed and we were unable to recover it. 00:28:18.426 [2024-11-19 10:55:25.814816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.426 [2024-11-19 10:55:25.814847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.426 qpair failed and we were unable to recover it. 
00:28:18.426 [2024-11-19 10:55:25.815104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.426 [2024-11-19 10:55:25.815138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.426 qpair failed and we were unable to recover it. 00:28:18.427 [2024-11-19 10:55:25.815338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.427 [2024-11-19 10:55:25.815370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.427 qpair failed and we were unable to recover it. 00:28:18.427 [2024-11-19 10:55:25.815649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.427 [2024-11-19 10:55:25.815683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.427 qpair failed and we were unable to recover it. 00:28:18.427 [2024-11-19 10:55:25.815928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.427 [2024-11-19 10:55:25.815970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.427 qpair failed and we were unable to recover it. 00:28:18.427 [2024-11-19 10:55:25.816271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.427 [2024-11-19 10:55:25.816302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.427 qpair failed and we were unable to recover it. 00:28:18.427 [2024-11-19 10:55:25.816520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.427 [2024-11-19 10:55:25.816551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.427 qpair failed and we were unable to recover it. 00:28:18.427 [2024-11-19 10:55:25.816828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.427 [2024-11-19 10:55:25.816859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.427 qpair failed and we were unable to recover it. 00:28:18.427 [2024-11-19 10:55:25.817057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.427 [2024-11-19 10:55:25.817091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.427 qpair failed and we were unable to recover it. 00:28:18.427 [2024-11-19 10:55:25.817345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.427 [2024-11-19 10:55:25.817377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.427 qpair failed and we were unable to recover it. 00:28:18.427 [2024-11-19 10:55:25.817582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.427 [2024-11-19 10:55:25.817614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.427 qpair failed and we were unable to recover it. 
00:28:18.427 [2024-11-19 10:55:25.817890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.427 [2024-11-19 10:55:25.817922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.427 qpair failed and we were unable to recover it. 00:28:18.427 [2024-11-19 10:55:25.818072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.427 [2024-11-19 10:55:25.818104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.427 qpair failed and we were unable to recover it. 00:28:18.427 [2024-11-19 10:55:25.818401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.427 [2024-11-19 10:55:25.818433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.427 qpair failed and we were unable to recover it. 00:28:18.427 [2024-11-19 10:55:25.818657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.427 [2024-11-19 10:55:25.818689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.427 qpair failed and we were unable to recover it. 00:28:18.427 [2024-11-19 10:55:25.818871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.427 [2024-11-19 10:55:25.818902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.427 qpair failed and we were unable to recover it. 00:28:18.427 [2024-11-19 10:55:25.819097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.427 [2024-11-19 10:55:25.819131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.427 qpair failed and we were unable to recover it. 00:28:18.427 [2024-11-19 10:55:25.819407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.427 [2024-11-19 10:55:25.819439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.427 qpair failed and we were unable to recover it. 00:28:18.427 [2024-11-19 10:55:25.819658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.427 [2024-11-19 10:55:25.819691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.427 qpair failed and we were unable to recover it. 00:28:18.427 [2024-11-19 10:55:25.819874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.427 [2024-11-19 10:55:25.819906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.427 qpair failed and we were unable to recover it. 00:28:18.427 [2024-11-19 10:55:25.820191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.427 [2024-11-19 10:55:25.820226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.427 qpair failed and we were unable to recover it. 
00:28:18.427 [2024-11-19 10:55:25.820487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.427 [2024-11-19 10:55:25.820518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.427 qpair failed and we were unable to recover it. 00:28:18.427 [2024-11-19 10:55:25.820821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.427 [2024-11-19 10:55:25.820852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.427 qpair failed and we were unable to recover it. 00:28:18.427 [2024-11-19 10:55:25.821044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.427 [2024-11-19 10:55:25.821078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.427 qpair failed and we were unable to recover it. 00:28:18.427 [2024-11-19 10:55:25.821190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.427 [2024-11-19 10:55:25.821221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.427 qpair failed and we were unable to recover it. 00:28:18.427 [2024-11-19 10:55:25.821498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.427 [2024-11-19 10:55:25.821530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.427 qpair failed and we were unable to recover it. 00:28:18.427 [2024-11-19 10:55:25.821779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.427 [2024-11-19 10:55:25.821811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.427 qpair failed and we were unable to recover it. 00:28:18.427 [2024-11-19 10:55:25.822019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.427 [2024-11-19 10:55:25.822051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.427 qpair failed and we were unable to recover it. 00:28:18.427 [2024-11-19 10:55:25.822323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.427 [2024-11-19 10:55:25.822355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.427 qpair failed and we were unable to recover it. 00:28:18.427 [2024-11-19 10:55:25.822608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.427 [2024-11-19 10:55:25.822647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.427 qpair failed and we were unable to recover it. 00:28:18.427 [2024-11-19 10:55:25.822962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.427 [2024-11-19 10:55:25.822995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.427 qpair failed and we were unable to recover it. 
00:28:18.427 [2024-11-19 10:55:25.823272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.427 [2024-11-19 10:55:25.823304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.427 qpair failed and we were unable to recover it. 00:28:18.427 [2024-11-19 10:55:25.823579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.427 [2024-11-19 10:55:25.823611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.427 qpair failed and we were unable to recover it. 00:28:18.427 [2024-11-19 10:55:25.823904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.427 [2024-11-19 10:55:25.823936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.427 qpair failed and we were unable to recover it. 00:28:18.427 [2024-11-19 10:55:25.824232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.427 [2024-11-19 10:55:25.824266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.427 qpair failed and we were unable to recover it. 00:28:18.427 [2024-11-19 10:55:25.824540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.427 [2024-11-19 10:55:25.824571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.427 qpair failed and we were unable to recover it. 00:28:18.427 [2024-11-19 10:55:25.824861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.427 [2024-11-19 10:55:25.824895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.427 qpair failed and we were unable to recover it. 00:28:18.427 [2024-11-19 10:55:25.825174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.427 [2024-11-19 10:55:25.825209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.427 qpair failed and we were unable to recover it. 00:28:18.427 [2024-11-19 10:55:25.825512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.427 [2024-11-19 10:55:25.825543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.427 qpair failed and we were unable to recover it. 00:28:18.427 [2024-11-19 10:55:25.825808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.427 [2024-11-19 10:55:25.825840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.428 qpair failed and we were unable to recover it. 00:28:18.428 [2024-11-19 10:55:25.826165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.428 [2024-11-19 10:55:25.826200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.428 qpair failed and we were unable to recover it. 
00:28:18.428 [2024-11-19 10:55:25.826401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.428 [2024-11-19 10:55:25.826432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.428 qpair failed and we were unable to recover it. 00:28:18.428 [2024-11-19 10:55:25.826545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.428 [2024-11-19 10:55:25.826576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.428 qpair failed and we were unable to recover it. 00:28:18.428 [2024-11-19 10:55:25.826837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.428 [2024-11-19 10:55:25.826870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.428 qpair failed and we were unable to recover it. 00:28:18.428 [2024-11-19 10:55:25.827149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.428 [2024-11-19 10:55:25.827182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.428 qpair failed and we were unable to recover it. 00:28:18.428 [2024-11-19 10:55:25.827515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.428 [2024-11-19 10:55:25.827546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.428 qpair failed and we were unable to recover it. 00:28:18.428 [2024-11-19 10:55:25.827797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.428 [2024-11-19 10:55:25.827829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.428 qpair failed and we were unable to recover it. 00:28:18.428 [2024-11-19 10:55:25.828033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.428 [2024-11-19 10:55:25.828066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.428 qpair failed and we were unable to recover it. 00:28:18.428 [2024-11-19 10:55:25.828344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.428 [2024-11-19 10:55:25.828377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.428 qpair failed and we were unable to recover it. 00:28:18.428 [2024-11-19 10:55:25.828571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.428 [2024-11-19 10:55:25.828603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.428 qpair failed and we were unable to recover it. 00:28:18.428 [2024-11-19 10:55:25.828860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.428 [2024-11-19 10:55:25.828892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.428 qpair failed and we were unable to recover it. 
00:28:18.428 [2024-11-19 10:55:25.829152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.428 [2024-11-19 10:55:25.829185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.428 qpair failed and we were unable to recover it. 00:28:18.428 [2024-11-19 10:55:25.829482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.428 [2024-11-19 10:55:25.829512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.428 qpair failed and we were unable to recover it. 00:28:18.428 [2024-11-19 10:55:25.829786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.428 [2024-11-19 10:55:25.829817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.428 qpair failed and we were unable to recover it. 00:28:18.704 [2024-11-19 10:55:25.830098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.704 [2024-11-19 10:55:25.830133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.704 qpair failed and we were unable to recover it. 00:28:18.704 [2024-11-19 10:55:25.830418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.704 [2024-11-19 10:55:25.830449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.704 qpair failed and we were unable to recover it. 00:28:18.704 [2024-11-19 10:55:25.830732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.704 [2024-11-19 10:55:25.830770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.704 qpair failed and we were unable to recover it. 00:28:18.704 [2024-11-19 10:55:25.831051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.704 [2024-11-19 10:55:25.831085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.704 qpair failed and we were unable to recover it. 00:28:18.704 [2024-11-19 10:55:25.831364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.704 [2024-11-19 10:55:25.831395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.704 qpair failed and we were unable to recover it. 00:28:18.704 [2024-11-19 10:55:25.831653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.704 [2024-11-19 10:55:25.831684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.704 qpair failed and we were unable to recover it. 00:28:18.704 [2024-11-19 10:55:25.831968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.704 [2024-11-19 10:55:25.832002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.704 qpair failed and we were unable to recover it. 
00:28:18.704 [2024-11-19 10:55:25.832253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.704 [2024-11-19 10:55:25.832284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.704 qpair failed and we were unable to recover it. 00:28:18.704 [2024-11-19 10:55:25.832545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.704 [2024-11-19 10:55:25.832577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.704 qpair failed and we were unable to recover it. 00:28:18.704 [2024-11-19 10:55:25.832790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.704 [2024-11-19 10:55:25.832822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.704 qpair failed and we were unable to recover it. 00:28:18.704 [2024-11-19 10:55:25.833024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.704 [2024-11-19 10:55:25.833057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.704 qpair failed and we were unable to recover it. 00:28:18.704 [2024-11-19 10:55:25.833327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.704 [2024-11-19 10:55:25.833357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.704 qpair failed and we were unable to recover it. 00:28:18.704 [2024-11-19 10:55:25.833643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.704 [2024-11-19 10:55:25.833674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.704 qpair failed and we were unable to recover it. 00:28:18.704 [2024-11-19 10:55:25.833970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.704 [2024-11-19 10:55:25.834004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.704 qpair failed and we were unable to recover it. 00:28:18.704 [2024-11-19 10:55:25.834280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.704 [2024-11-19 10:55:25.834313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.704 qpair failed and we were unable to recover it. 00:28:18.704 [2024-11-19 10:55:25.834500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.704 [2024-11-19 10:55:25.834532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.704 qpair failed and we were unable to recover it. 00:28:18.705 [2024-11-19 10:55:25.834819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.705 [2024-11-19 10:55:25.834851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.705 qpair failed and we were unable to recover it. 
00:28:18.705 [2024-11-19 10:55:25.835118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.705 [2024-11-19 10:55:25.835153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.705 qpair failed and we were unable to recover it. 00:28:18.705 [2024-11-19 10:55:25.835451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.705 [2024-11-19 10:55:25.835482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.705 qpair failed and we were unable to recover it. 00:28:18.705 [2024-11-19 10:55:25.835711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.705 [2024-11-19 10:55:25.835743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.705 qpair failed and we were unable to recover it. 00:28:18.705 [2024-11-19 10:55:25.835943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.705 [2024-11-19 10:55:25.836001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.705 qpair failed and we were unable to recover it. 00:28:18.705 [2024-11-19 10:55:25.836266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.705 [2024-11-19 10:55:25.836297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.705 qpair failed and we were unable to recover it. 00:28:18.705 [2024-11-19 10:55:25.836597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.705 [2024-11-19 10:55:25.836629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.705 qpair failed and we were unable to recover it. 00:28:18.705 [2024-11-19 10:55:25.836831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.705 [2024-11-19 10:55:25.836863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.705 qpair failed and we were unable to recover it. 00:28:18.705 [2024-11-19 10:55:25.837062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.705 [2024-11-19 10:55:25.837095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.705 qpair failed and we were unable to recover it. 00:28:18.705 [2024-11-19 10:55:25.837277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.705 [2024-11-19 10:55:25.837309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.705 qpair failed and we were unable to recover it. 00:28:18.705 [2024-11-19 10:55:25.837530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.705 [2024-11-19 10:55:25.837561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.705 qpair failed and we were unable to recover it. 
00:28:18.705 [2024-11-19 10:55:25.837838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.705 [2024-11-19 10:55:25.837870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.705 qpair failed and we were unable to recover it. 00:28:18.705 [2024-11-19 10:55:25.838082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.705 [2024-11-19 10:55:25.838115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.705 qpair failed and we were unable to recover it. 00:28:18.705 [2024-11-19 10:55:25.838401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.705 [2024-11-19 10:55:25.838433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.705 qpair failed and we were unable to recover it. 00:28:18.705 [2024-11-19 10:55:25.838702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.705 [2024-11-19 10:55:25.838734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.705 qpair failed and we were unable to recover it. 00:28:18.705 [2024-11-19 10:55:25.838936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.705 [2024-11-19 10:55:25.838980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.705 qpair failed and we were unable to recover it. 00:28:18.705 [2024-11-19 10:55:25.839176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.705 [2024-11-19 10:55:25.839207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.705 qpair failed and we were unable to recover it. 00:28:18.705 [2024-11-19 10:55:25.839429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.705 [2024-11-19 10:55:25.839459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.705 qpair failed and we were unable to recover it. 00:28:18.705 [2024-11-19 10:55:25.839714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.705 [2024-11-19 10:55:25.839745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.705 qpair failed and we were unable to recover it. 00:28:18.705 [2024-11-19 10:55:25.840010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.705 [2024-11-19 10:55:25.840043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.705 qpair failed and we were unable to recover it. 00:28:18.705 [2024-11-19 10:55:25.840223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.705 [2024-11-19 10:55:25.840254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:18.705 qpair failed and we were unable to recover it. 
00:28:18.705 [2024-11-19 10:55:25.840504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.705 [2024-11-19 10:55:25.840535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:18.705 qpair failed and we were unable to recover it.
[... the same connect() failed / sock connection error / qpair failed triplet repeats with advancing timestamps (10:55:25.840834 through 10:55:25.887920), all for tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 ...]
00:28:18.710 [2024-11-19 10:55:25.888205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.710 [2024-11-19 10:55:25.888284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:18.710 qpair failed and we were unable to recover it.
[... the triplet continues to repeat (10:55:25.888587 through 10:55:25.898561), now for tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 ...]
00:28:18.711 [2024-11-19 10:55:25.898787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.711 [2024-11-19 10:55:25.898820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.711 qpair failed and we were unable to recover it. 00:28:18.711 [2024-11-19 10:55:25.899072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.711 [2024-11-19 10:55:25.899106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.711 qpair failed and we were unable to recover it. 00:28:18.711 [2024-11-19 10:55:25.899295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.711 [2024-11-19 10:55:25.899327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.711 qpair failed and we were unable to recover it. 00:28:18.711 [2024-11-19 10:55:25.899529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.711 [2024-11-19 10:55:25.899561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.711 qpair failed and we were unable to recover it. 00:28:18.711 [2024-11-19 10:55:25.899810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.711 [2024-11-19 10:55:25.899842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.711 qpair failed and we were unable to recover it. 00:28:18.711 [2024-11-19 10:55:25.900142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.711 [2024-11-19 10:55:25.900176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.711 qpair failed and we were unable to recover it. 00:28:18.711 [2024-11-19 10:55:25.900445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.711 [2024-11-19 10:55:25.900478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.711 qpair failed and we were unable to recover it. 00:28:18.711 [2024-11-19 10:55:25.900601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.711 [2024-11-19 10:55:25.900633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.711 qpair failed and we were unable to recover it. 00:28:18.711 [2024-11-19 10:55:25.900908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.711 [2024-11-19 10:55:25.900940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.711 qpair failed and we were unable to recover it. 00:28:18.711 [2024-11-19 10:55:25.901207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.711 [2024-11-19 10:55:25.901247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.711 qpair failed and we were unable to recover it. 
00:28:18.711 [2024-11-19 10:55:25.901462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.711 [2024-11-19 10:55:25.901494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.711 qpair failed and we were unable to recover it. 00:28:18.711 [2024-11-19 10:55:25.901688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.711 [2024-11-19 10:55:25.901720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.711 qpair failed and we were unable to recover it. 00:28:18.711 [2024-11-19 10:55:25.901922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.711 [2024-11-19 10:55:25.901960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.711 qpair failed and we were unable to recover it. 00:28:18.711 [2024-11-19 10:55:25.902180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.711 [2024-11-19 10:55:25.902212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.711 qpair failed and we were unable to recover it. 00:28:18.711 [2024-11-19 10:55:25.902433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.711 [2024-11-19 10:55:25.902465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.711 qpair failed and we were unable to recover it. 00:28:18.711 [2024-11-19 10:55:25.902663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.711 [2024-11-19 10:55:25.902696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.711 qpair failed and we were unable to recover it. 00:28:18.711 [2024-11-19 10:55:25.902959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.711 [2024-11-19 10:55:25.902992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.711 qpair failed and we were unable to recover it. 00:28:18.711 [2024-11-19 10:55:25.903213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.711 [2024-11-19 10:55:25.903244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.711 qpair failed and we were unable to recover it. 00:28:18.711 [2024-11-19 10:55:25.903443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.711 [2024-11-19 10:55:25.903476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.711 qpair failed and we were unable to recover it. 00:28:18.711 [2024-11-19 10:55:25.903730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.711 [2024-11-19 10:55:25.903763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.711 qpair failed and we were unable to recover it. 
00:28:18.711 [2024-11-19 10:55:25.904062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.711 [2024-11-19 10:55:25.904097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.711 qpair failed and we were unable to recover it. 00:28:18.711 [2024-11-19 10:55:25.904243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.711 [2024-11-19 10:55:25.904276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.711 qpair failed and we were unable to recover it. 00:28:18.711 [2024-11-19 10:55:25.904552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.711 [2024-11-19 10:55:25.904583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.711 qpair failed and we were unable to recover it. 00:28:18.711 [2024-11-19 10:55:25.904769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.711 [2024-11-19 10:55:25.904802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.711 qpair failed and we were unable to recover it. 00:28:18.711 [2024-11-19 10:55:25.905023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.711 [2024-11-19 10:55:25.905057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.711 qpair failed and we were unable to recover it. 00:28:18.711 [2024-11-19 10:55:25.905328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.711 [2024-11-19 10:55:25.905361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.711 qpair failed and we were unable to recover it. 00:28:18.711 [2024-11-19 10:55:25.905506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.711 [2024-11-19 10:55:25.905538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.711 qpair failed and we were unable to recover it. 00:28:18.712 [2024-11-19 10:55:25.905655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.712 [2024-11-19 10:55:25.905687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.712 qpair failed and we were unable to recover it. 00:28:18.712 [2024-11-19 10:55:25.905829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.712 [2024-11-19 10:55:25.905861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.712 qpair failed and we were unable to recover it. 00:28:18.712 [2024-11-19 10:55:25.906044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.712 [2024-11-19 10:55:25.906078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.712 qpair failed and we were unable to recover it. 
00:28:18.712 [2024-11-19 10:55:25.906294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.712 [2024-11-19 10:55:25.906326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.712 qpair failed and we were unable to recover it. 00:28:18.712 [2024-11-19 10:55:25.906527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.712 [2024-11-19 10:55:25.906560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.712 qpair failed and we were unable to recover it. 00:28:18.712 [2024-11-19 10:55:25.906835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.712 [2024-11-19 10:55:25.906867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.712 qpair failed and we were unable to recover it. 00:28:18.712 [2024-11-19 10:55:25.907144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.712 [2024-11-19 10:55:25.907179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.712 qpair failed and we were unable to recover it. 00:28:18.712 [2024-11-19 10:55:25.907375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.712 [2024-11-19 10:55:25.907406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.712 qpair failed and we were unable to recover it. 00:28:18.712 [2024-11-19 10:55:25.907660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.712 [2024-11-19 10:55:25.907692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.712 qpair failed and we were unable to recover it. 00:28:18.712 [2024-11-19 10:55:25.907997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.712 [2024-11-19 10:55:25.908031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.712 qpair failed and we were unable to recover it. 00:28:18.712 [2024-11-19 10:55:25.908315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.712 [2024-11-19 10:55:25.908349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.712 qpair failed and we were unable to recover it. 00:28:18.712 [2024-11-19 10:55:25.908622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.712 [2024-11-19 10:55:25.908653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.712 qpair failed and we were unable to recover it. 00:28:18.712 [2024-11-19 10:55:25.908970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.712 [2024-11-19 10:55:25.909006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.712 qpair failed and we were unable to recover it. 
00:28:18.712 [2024-11-19 10:55:25.909278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.712 [2024-11-19 10:55:25.909311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.712 qpair failed and we were unable to recover it. 00:28:18.712 [2024-11-19 10:55:25.909441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.712 [2024-11-19 10:55:25.909474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.712 qpair failed and we were unable to recover it. 00:28:18.712 [2024-11-19 10:55:25.909746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.712 [2024-11-19 10:55:25.909779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.712 qpair failed and we were unable to recover it. 00:28:18.712 [2024-11-19 10:55:25.909981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.712 [2024-11-19 10:55:25.910014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.712 qpair failed and we were unable to recover it. 00:28:18.712 [2024-11-19 10:55:25.910144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.712 [2024-11-19 10:55:25.910177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.712 qpair failed and we were unable to recover it. 00:28:18.712 [2024-11-19 10:55:25.910355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.712 [2024-11-19 10:55:25.910388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.712 qpair failed and we were unable to recover it. 00:28:18.712 [2024-11-19 10:55:25.910689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.712 [2024-11-19 10:55:25.910721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.712 qpair failed and we were unable to recover it. 00:28:18.712 [2024-11-19 10:55:25.910960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.712 [2024-11-19 10:55:25.910994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.712 qpair failed and we were unable to recover it. 00:28:18.712 [2024-11-19 10:55:25.911272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.712 [2024-11-19 10:55:25.911304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.712 qpair failed and we were unable to recover it. 00:28:18.712 [2024-11-19 10:55:25.911448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.712 [2024-11-19 10:55:25.911481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.712 qpair failed and we were unable to recover it. 
00:28:18.712 [2024-11-19 10:55:25.911737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.712 [2024-11-19 10:55:25.911771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.712 qpair failed and we were unable to recover it. 00:28:18.712 [2024-11-19 10:55:25.911887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.712 [2024-11-19 10:55:25.911919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.712 qpair failed and we were unable to recover it. 00:28:18.712 [2024-11-19 10:55:25.912200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.712 [2024-11-19 10:55:25.912234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.712 qpair failed and we were unable to recover it. 00:28:18.712 [2024-11-19 10:55:25.912515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.712 [2024-11-19 10:55:25.912548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.712 qpair failed and we were unable to recover it. 00:28:18.712 [2024-11-19 10:55:25.912831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.712 [2024-11-19 10:55:25.912863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.712 qpair failed and we were unable to recover it. 00:28:18.712 [2024-11-19 10:55:25.913010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.712 [2024-11-19 10:55:25.913044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.712 qpair failed and we were unable to recover it. 00:28:18.712 [2024-11-19 10:55:25.913264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.712 [2024-11-19 10:55:25.913296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.712 qpair failed and we were unable to recover it. 00:28:18.712 [2024-11-19 10:55:25.913568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.712 [2024-11-19 10:55:25.913600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.712 qpair failed and we were unable to recover it. 00:28:18.712 [2024-11-19 10:55:25.913893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.712 [2024-11-19 10:55:25.913924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.712 qpair failed and we were unable to recover it. 00:28:18.712 [2024-11-19 10:55:25.914232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.712 [2024-11-19 10:55:25.914266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.712 qpair failed and we were unable to recover it. 
00:28:18.712 [2024-11-19 10:55:25.914526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.712 [2024-11-19 10:55:25.914558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.712 qpair failed and we were unable to recover it. 00:28:18.712 [2024-11-19 10:55:25.914691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.712 [2024-11-19 10:55:25.914724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.712 qpair failed and we were unable to recover it. 00:28:18.712 [2024-11-19 10:55:25.915003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.712 [2024-11-19 10:55:25.915037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.712 qpair failed and we were unable to recover it. 00:28:18.712 [2024-11-19 10:55:25.915315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.712 [2024-11-19 10:55:25.915348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.712 qpair failed and we were unable to recover it. 00:28:18.712 [2024-11-19 10:55:25.915615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.712 [2024-11-19 10:55:25.915646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.712 qpair failed and we were unable to recover it. 00:28:18.712 [2024-11-19 10:55:25.915944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.713 [2024-11-19 10:55:25.915987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.713 qpair failed and we were unable to recover it. 00:28:18.713 [2024-11-19 10:55:25.916128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.713 [2024-11-19 10:55:25.916162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.713 qpair failed and we were unable to recover it. 00:28:18.713 [2024-11-19 10:55:25.916425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.713 [2024-11-19 10:55:25.916458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.713 qpair failed and we were unable to recover it. 00:28:18.713 [2024-11-19 10:55:25.916722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.713 [2024-11-19 10:55:25.916755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.713 qpair failed and we were unable to recover it. 00:28:18.713 [2024-11-19 10:55:25.916961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.713 [2024-11-19 10:55:25.916996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.713 qpair failed and we were unable to recover it. 
00:28:18.713 [2024-11-19 10:55:25.917273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.713 [2024-11-19 10:55:25.917306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.713 qpair failed and we were unable to recover it. 00:28:18.713 [2024-11-19 10:55:25.917602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.713 [2024-11-19 10:55:25.917635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.713 qpair failed and we were unable to recover it. 00:28:18.713 [2024-11-19 10:55:25.917904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.713 [2024-11-19 10:55:25.917937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.713 qpair failed and we were unable to recover it. 00:28:18.713 [2024-11-19 10:55:25.918192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.713 [2024-11-19 10:55:25.918226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.713 qpair failed and we were unable to recover it. 00:28:18.713 [2024-11-19 10:55:25.918428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.713 [2024-11-19 10:55:25.918460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.713 qpair failed and we were unable to recover it. 00:28:18.713 [2024-11-19 10:55:25.918758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.713 [2024-11-19 10:55:25.918790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.713 qpair failed and we were unable to recover it. 00:28:18.713 [2024-11-19 10:55:25.919063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.713 [2024-11-19 10:55:25.919098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.713 qpair failed and we were unable to recover it. 00:28:18.713 [2024-11-19 10:55:25.919320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.713 [2024-11-19 10:55:25.919364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.713 qpair failed and we were unable to recover it. 00:28:18.713 [2024-11-19 10:55:25.919613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.713 [2024-11-19 10:55:25.919645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.713 qpair failed and we were unable to recover it. 00:28:18.713 [2024-11-19 10:55:25.919956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.713 [2024-11-19 10:55:25.919989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.713 qpair failed and we were unable to recover it. 
00:28:18.713 [2024-11-19 10:55:25.920266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.713 [2024-11-19 10:55:25.920299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.713 qpair failed and we were unable to recover it. 00:28:18.713 [2024-11-19 10:55:25.920489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.713 [2024-11-19 10:55:25.920521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.713 qpair failed and we were unable to recover it. 00:28:18.713 [2024-11-19 10:55:25.920788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.713 [2024-11-19 10:55:25.920821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.713 qpair failed and we were unable to recover it. 00:28:18.713 [2024-11-19 10:55:25.920966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.713 [2024-11-19 10:55:25.920999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.713 qpair failed and we were unable to recover it. 00:28:18.713 [2024-11-19 10:55:25.921299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.713 [2024-11-19 10:55:25.921332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.713 qpair failed and we were unable to recover it. 00:28:18.713 [2024-11-19 10:55:25.921609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.713 [2024-11-19 10:55:25.921641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.713 qpair failed and we were unable to recover it. 00:28:18.713 [2024-11-19 10:55:25.921929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.713 [2024-11-19 10:55:25.921972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.713 qpair failed and we were unable to recover it. 00:28:18.713 [2024-11-19 10:55:25.922236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.713 [2024-11-19 10:55:25.922268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.713 qpair failed and we were unable to recover it. 00:28:18.713 [2024-11-19 10:55:25.922555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.713 [2024-11-19 10:55:25.922587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.713 qpair failed and we were unable to recover it. 00:28:18.713 [2024-11-19 10:55:25.922715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.713 [2024-11-19 10:55:25.922747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.713 qpair failed and we were unable to recover it. 
00:28:18.713 [2024-11-19 10:55:25.922957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.713 [2024-11-19 10:55:25.922990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.713 qpair failed and we were unable to recover it. 00:28:18.713 [2024-11-19 10:55:25.923296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.713 [2024-11-19 10:55:25.923329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.713 qpair failed and we were unable to recover it. 00:28:18.713 [2024-11-19 10:55:25.923480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.713 [2024-11-19 10:55:25.923512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.713 qpair failed and we were unable to recover it. 00:28:18.713 [2024-11-19 10:55:25.923761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.713 [2024-11-19 10:55:25.923793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.713 qpair failed and we were unable to recover it. 00:28:18.713 [2024-11-19 10:55:25.924091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.713 [2024-11-19 10:55:25.924126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.713 qpair failed and we were unable to recover it. 00:28:18.713 [2024-11-19 10:55:25.924321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.713 [2024-11-19 10:55:25.924352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.713 qpair failed and we were unable to recover it. 00:28:18.713 [2024-11-19 10:55:25.924584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.713 [2024-11-19 10:55:25.924616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.713 qpair failed and we were unable to recover it. 00:28:18.713 [2024-11-19 10:55:25.924868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.713 [2024-11-19 10:55:25.924900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.713 qpair failed and we were unable to recover it. 00:28:18.713 [2024-11-19 10:55:25.925158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.713 [2024-11-19 10:55:25.925191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.713 qpair failed and we were unable to recover it. 00:28:18.714 [2024-11-19 10:55:25.925379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.714 [2024-11-19 10:55:25.925411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.714 qpair failed and we were unable to recover it. 
00:28:18.714 [2024-11-19 10:55:25.925647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.714 [2024-11-19 10:55:25.925681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.714 qpair failed and we were unable to recover it. 00:28:18.714 [2024-11-19 10:55:25.925886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.714 [2024-11-19 10:55:25.925918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.714 qpair failed and we were unable to recover it. 00:28:18.714 [2024-11-19 10:55:25.926204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.714 [2024-11-19 10:55:25.926238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.714 qpair failed and we were unable to recover it. 00:28:18.714 [2024-11-19 10:55:25.926440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.714 [2024-11-19 10:55:25.926472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.714 qpair failed and we were unable to recover it. 00:28:18.714 [2024-11-19 10:55:25.926653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.714 [2024-11-19 10:55:25.926692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.714 qpair failed and we were unable to recover it. 00:28:18.714 [2024-11-19 10:55:25.926887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.714 [2024-11-19 10:55:25.926918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.714 qpair failed and we were unable to recover it. 00:28:18.714 [2024-11-19 10:55:25.927154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.714 [2024-11-19 10:55:25.927187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.714 qpair failed and we were unable to recover it. 00:28:18.714 [2024-11-19 10:55:25.927418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.714 [2024-11-19 10:55:25.927450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.714 qpair failed and we were unable to recover it. 00:28:18.714 [2024-11-19 10:55:25.927703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.714 [2024-11-19 10:55:25.927736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.714 qpair failed and we were unable to recover it. 00:28:18.714 [2024-11-19 10:55:25.927965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.714 [2024-11-19 10:55:25.927999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.714 qpair failed and we were unable to recover it. 
00:28:18.714 [2024-11-19 10:55:25.928110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.714 [2024-11-19 10:55:25.928143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.714 qpair failed and we were unable to recover it. 00:28:18.714 [2024-11-19 10:55:25.928394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.714 [2024-11-19 10:55:25.928425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.714 qpair failed and we were unable to recover it. 00:28:18.714 [2024-11-19 10:55:25.928675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.714 [2024-11-19 10:55:25.928708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.714 qpair failed and we were unable to recover it. 00:28:18.714 [2024-11-19 10:55:25.928904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.714 [2024-11-19 10:55:25.928936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.714 qpair failed and we were unable to recover it. 00:28:18.714 [2024-11-19 10:55:25.929157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.714 [2024-11-19 10:55:25.929191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.714 qpair failed and we were unable to recover it. 00:28:18.714 [2024-11-19 10:55:25.929403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.714 [2024-11-19 10:55:25.929436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.714 qpair failed and we were unable to recover it. 00:28:18.714 [2024-11-19 10:55:25.929707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.714 [2024-11-19 10:55:25.929740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.714 qpair failed and we were unable to recover it. 00:28:18.714 [2024-11-19 10:55:25.930014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.714 [2024-11-19 10:55:25.930047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.714 qpair failed and we were unable to recover it. 00:28:18.714 [2024-11-19 10:55:25.930337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.714 [2024-11-19 10:55:25.930370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.714 qpair failed and we were unable to recover it. 00:28:18.714 [2024-11-19 10:55:25.930565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.714 [2024-11-19 10:55:25.930597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.714 qpair failed and we were unable to recover it. 
00:28:18.714 [2024-11-19 10:55:25.930847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.714 [2024-11-19 10:55:25.930880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.714 qpair failed and we were unable to recover it. 00:28:18.714 [2024-11-19 10:55:25.931136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.714 [2024-11-19 10:55:25.931170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.714 qpair failed and we were unable to recover it. 00:28:18.714 [2024-11-19 10:55:25.931374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.714 [2024-11-19 10:55:25.931407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.714 qpair failed and we were unable to recover it. 00:28:18.714 [2024-11-19 10:55:25.931680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.714 [2024-11-19 10:55:25.931712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.714 qpair failed and we were unable to recover it. 00:28:18.714 [2024-11-19 10:55:25.931995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.714 [2024-11-19 10:55:25.932029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.714 qpair failed and we were unable to recover it. 00:28:18.714 [2024-11-19 10:55:25.932173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.714 [2024-11-19 10:55:25.932204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.714 qpair failed and we were unable to recover it. 00:28:18.714 [2024-11-19 10:55:25.932410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.714 [2024-11-19 10:55:25.932444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.714 qpair failed and we were unable to recover it. 00:28:18.714 [2024-11-19 10:55:25.932625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.714 [2024-11-19 10:55:25.932657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.714 qpair failed and we were unable to recover it. 00:28:18.714 [2024-11-19 10:55:25.932935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.714 [2024-11-19 10:55:25.932985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.714 qpair failed and we were unable to recover it. 00:28:18.714 [2024-11-19 10:55:25.933259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.714 [2024-11-19 10:55:25.933291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.714 qpair failed and we were unable to recover it. 
00:28:18.714 [2024-11-19 10:55:25.933569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.714 [2024-11-19 10:55:25.933600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:18.714 qpair failed and we were unable to recover it.
[... the same two-line connect()/qpair error pair, differing only in timestamp, repeats ~200 more times between 10:55:25.933 and 10:55:25.991 (elapsed 00:28:18.714-00:28:18.720), all against tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420, each ending "qpair failed and we were unable to recover it." ...]
00:28:18.720 [2024-11-19 10:55:25.992124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.720 [2024-11-19 10:55:25.992158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.720 qpair failed and we were unable to recover it. 00:28:18.720 [2024-11-19 10:55:25.992421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.720 [2024-11-19 10:55:25.992454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.720 qpair failed and we were unable to recover it. 00:28:18.720 [2024-11-19 10:55:25.992663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.720 [2024-11-19 10:55:25.992695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.720 qpair failed and we were unable to recover it. 00:28:18.720 [2024-11-19 10:55:25.992961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.720 [2024-11-19 10:55:25.992995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.720 qpair failed and we were unable to recover it. 00:28:18.720 [2024-11-19 10:55:25.993292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.720 [2024-11-19 10:55:25.993325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.720 qpair failed and we were unable to recover it. 00:28:18.720 [2024-11-19 10:55:25.993586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.720 [2024-11-19 10:55:25.993619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.720 qpair failed and we were unable to recover it. 00:28:18.720 [2024-11-19 10:55:25.993875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.720 [2024-11-19 10:55:25.993908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.720 qpair failed and we were unable to recover it. 00:28:18.720 [2024-11-19 10:55:25.994111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.720 [2024-11-19 10:55:25.994144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.720 qpair failed and we were unable to recover it. 00:28:18.720 [2024-11-19 10:55:25.994395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.720 [2024-11-19 10:55:25.994428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.720 qpair failed and we were unable to recover it. 00:28:18.720 [2024-11-19 10:55:25.994775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.720 [2024-11-19 10:55:25.994808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.720 qpair failed and we were unable to recover it. 
00:28:18.720 [2024-11-19 10:55:25.995080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.720 [2024-11-19 10:55:25.995113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.720 qpair failed and we were unable to recover it. 00:28:18.720 [2024-11-19 10:55:25.995325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.720 [2024-11-19 10:55:25.995358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.720 qpair failed and we were unable to recover it. 00:28:18.720 [2024-11-19 10:55:25.995478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.720 [2024-11-19 10:55:25.995510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.720 qpair failed and we were unable to recover it. 00:28:18.720 [2024-11-19 10:55:25.995695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.720 [2024-11-19 10:55:25.995727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.720 qpair failed and we were unable to recover it. 00:28:18.720 [2024-11-19 10:55:25.995850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.720 [2024-11-19 10:55:25.995881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.720 qpair failed and we were unable to recover it. 00:28:18.720 [2024-11-19 10:55:25.996131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.720 [2024-11-19 10:55:25.996164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.720 qpair failed and we were unable to recover it. 00:28:18.720 [2024-11-19 10:55:25.996441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.720 [2024-11-19 10:55:25.996473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.720 qpair failed and we were unable to recover it. 00:28:18.720 [2024-11-19 10:55:25.996674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.720 [2024-11-19 10:55:25.996707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.720 qpair failed and we were unable to recover it. 00:28:18.720 [2024-11-19 10:55:25.996970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.720 [2024-11-19 10:55:25.997005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.720 qpair failed and we were unable to recover it. 00:28:18.720 [2024-11-19 10:55:25.997256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.720 [2024-11-19 10:55:25.997288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.720 qpair failed and we were unable to recover it. 
00:28:18.720 [2024-11-19 10:55:25.997552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.720 [2024-11-19 10:55:25.997584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.720 qpair failed and we were unable to recover it. 00:28:18.721 [2024-11-19 10:55:25.997878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.721 [2024-11-19 10:55:25.997911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.721 qpair failed and we were unable to recover it. 00:28:18.721 [2024-11-19 10:55:25.998083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.721 [2024-11-19 10:55:25.998140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.721 qpair failed and we were unable to recover it. 00:28:18.721 [2024-11-19 10:55:25.998350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.721 [2024-11-19 10:55:25.998383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.721 qpair failed and we were unable to recover it. 00:28:18.721 [2024-11-19 10:55:25.998653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.721 [2024-11-19 10:55:25.998685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.721 qpair failed and we were unable to recover it. 00:28:18.721 [2024-11-19 10:55:25.998894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.721 [2024-11-19 10:55:25.998927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.721 qpair failed and we were unable to recover it. 00:28:18.721 [2024-11-19 10:55:25.999118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.721 [2024-11-19 10:55:25.999151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.721 qpair failed and we were unable to recover it. 00:28:18.721 [2024-11-19 10:55:25.999351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.721 [2024-11-19 10:55:25.999383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.721 qpair failed and we were unable to recover it. 00:28:18.721 [2024-11-19 10:55:25.999656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.721 [2024-11-19 10:55:25.999688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.721 qpair failed and we were unable to recover it. 00:28:18.721 [2024-11-19 10:55:25.999975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.721 [2024-11-19 10:55:26.000009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.721 qpair failed and we were unable to recover it. 
00:28:18.721 [2024-11-19 10:55:26.000288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.721 [2024-11-19 10:55:26.000320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.721 qpair failed and we were unable to recover it. 00:28:18.721 [2024-11-19 10:55:26.000602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.721 [2024-11-19 10:55:26.000636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.721 qpair failed and we were unable to recover it. 00:28:18.721 [2024-11-19 10:55:26.000919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.721 [2024-11-19 10:55:26.000968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.721 qpair failed and we were unable to recover it. 00:28:18.721 [2024-11-19 10:55:26.001258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.721 [2024-11-19 10:55:26.001291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.721 qpair failed and we were unable to recover it. 00:28:18.721 [2024-11-19 10:55:26.001549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.721 [2024-11-19 10:55:26.001581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.721 qpair failed and we were unable to recover it. 00:28:18.721 [2024-11-19 10:55:26.001865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.721 [2024-11-19 10:55:26.001897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.721 qpair failed and we were unable to recover it. 00:28:18.721 [2024-11-19 10:55:26.002196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.721 [2024-11-19 10:55:26.002231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.721 qpair failed and we were unable to recover it. 00:28:18.721 [2024-11-19 10:55:26.002432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.721 [2024-11-19 10:55:26.002465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.721 qpair failed and we were unable to recover it. 00:28:18.721 [2024-11-19 10:55:26.002739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.721 [2024-11-19 10:55:26.002771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.721 qpair failed and we were unable to recover it. 00:28:18.721 [2024-11-19 10:55:26.003056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.721 [2024-11-19 10:55:26.003091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.721 qpair failed and we were unable to recover it. 
00:28:18.721 [2024-11-19 10:55:26.003345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.721 [2024-11-19 10:55:26.003377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.721 qpair failed and we were unable to recover it. 00:28:18.721 [2024-11-19 10:55:26.003564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.721 [2024-11-19 10:55:26.003597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.721 qpair failed and we were unable to recover it. 00:28:18.721 [2024-11-19 10:55:26.003846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.721 [2024-11-19 10:55:26.003878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.721 qpair failed and we were unable to recover it. 00:28:18.721 [2024-11-19 10:55:26.004135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.721 [2024-11-19 10:55:26.004169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.721 qpair failed and we were unable to recover it. 00:28:18.721 [2024-11-19 10:55:26.004442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.721 [2024-11-19 10:55:26.004476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.721 qpair failed and we were unable to recover it. 00:28:18.721 [2024-11-19 10:55:26.004755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.721 [2024-11-19 10:55:26.004788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.721 qpair failed and we were unable to recover it. 00:28:18.721 [2024-11-19 10:55:26.004898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.721 [2024-11-19 10:55:26.004930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.721 qpair failed and we were unable to recover it. 00:28:18.721 [2024-11-19 10:55:26.005227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.721 [2024-11-19 10:55:26.005260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.721 qpair failed and we were unable to recover it. 00:28:18.721 [2024-11-19 10:55:26.005453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.721 [2024-11-19 10:55:26.005486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.721 qpair failed and we were unable to recover it. 00:28:18.721 [2024-11-19 10:55:26.005704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.721 [2024-11-19 10:55:26.005741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.721 qpair failed and we were unable to recover it. 
00:28:18.721 [2024-11-19 10:55:26.005942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.721 [2024-11-19 10:55:26.005986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.721 qpair failed and we were unable to recover it. 00:28:18.721 [2024-11-19 10:55:26.006139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.721 [2024-11-19 10:55:26.006171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.721 qpair failed and we were unable to recover it. 00:28:18.721 [2024-11-19 10:55:26.006357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.721 [2024-11-19 10:55:26.006389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.721 qpair failed and we were unable to recover it. 00:28:18.721 [2024-11-19 10:55:26.006681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.721 [2024-11-19 10:55:26.006712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.721 qpair failed and we were unable to recover it. 00:28:18.721 [2024-11-19 10:55:26.006969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.721 [2024-11-19 10:55:26.007003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.721 qpair failed and we were unable to recover it. 00:28:18.721 [2024-11-19 10:55:26.007263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.721 [2024-11-19 10:55:26.007296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.721 qpair failed and we were unable to recover it. 00:28:18.721 [2024-11-19 10:55:26.007593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.721 [2024-11-19 10:55:26.007625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.721 qpair failed and we were unable to recover it. 00:28:18.721 [2024-11-19 10:55:26.007756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.721 [2024-11-19 10:55:26.007787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.721 qpair failed and we were unable to recover it. 00:28:18.721 [2024-11-19 10:55:26.007971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.722 [2024-11-19 10:55:26.008004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.722 qpair failed and we were unable to recover it. 00:28:18.722 [2024-11-19 10:55:26.008289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.722 [2024-11-19 10:55:26.008321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.722 qpair failed and we were unable to recover it. 
00:28:18.722 [2024-11-19 10:55:26.008597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.722 [2024-11-19 10:55:26.008629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.722 qpair failed and we were unable to recover it. 00:28:18.722 [2024-11-19 10:55:26.008906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.722 [2024-11-19 10:55:26.008938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.722 qpair failed and we were unable to recover it. 00:28:18.722 [2024-11-19 10:55:26.009152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.722 [2024-11-19 10:55:26.009186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.722 qpair failed and we were unable to recover it. 00:28:18.722 [2024-11-19 10:55:26.009335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.722 [2024-11-19 10:55:26.009367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.722 qpair failed and we were unable to recover it. 00:28:18.722 [2024-11-19 10:55:26.009558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.722 [2024-11-19 10:55:26.009591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.722 qpair failed and we were unable to recover it. 00:28:18.722 [2024-11-19 10:55:26.009861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.722 [2024-11-19 10:55:26.009892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.722 qpair failed and we were unable to recover it. 00:28:18.722 [2024-11-19 10:55:26.010102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.722 [2024-11-19 10:55:26.010137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.722 qpair failed and we were unable to recover it. 00:28:18.722 [2024-11-19 10:55:26.010415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.722 [2024-11-19 10:55:26.010447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.722 qpair failed and we were unable to recover it. 00:28:18.722 [2024-11-19 10:55:26.010720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.722 [2024-11-19 10:55:26.010751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.722 qpair failed and we were unable to recover it. 00:28:18.722 [2024-11-19 10:55:26.010968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.722 [2024-11-19 10:55:26.011001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.722 qpair failed and we were unable to recover it. 
00:28:18.722 [2024-11-19 10:55:26.011188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.722 [2024-11-19 10:55:26.011220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.722 qpair failed and we were unable to recover it. 00:28:18.722 [2024-11-19 10:55:26.011431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.722 [2024-11-19 10:55:26.011462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.722 qpair failed and we were unable to recover it. 00:28:18.722 [2024-11-19 10:55:26.011712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.722 [2024-11-19 10:55:26.011744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.722 qpair failed and we were unable to recover it. 00:28:18.722 [2024-11-19 10:55:26.011992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.722 [2024-11-19 10:55:26.012026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.722 qpair failed and we were unable to recover it. 00:28:18.722 [2024-11-19 10:55:26.012326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.722 [2024-11-19 10:55:26.012358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.722 qpair failed and we were unable to recover it. 00:28:18.722 [2024-11-19 10:55:26.012561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.722 [2024-11-19 10:55:26.012593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.722 qpair failed and we were unable to recover it. 00:28:18.722 [2024-11-19 10:55:26.012789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.722 [2024-11-19 10:55:26.012826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.722 qpair failed and we were unable to recover it. 00:28:18.722 [2024-11-19 10:55:26.013133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.722 [2024-11-19 10:55:26.013166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.722 qpair failed and we were unable to recover it. 00:28:18.722 [2024-11-19 10:55:26.013415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.722 [2024-11-19 10:55:26.013447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.722 qpair failed and we were unable to recover it. 00:28:18.722 [2024-11-19 10:55:26.013645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.722 [2024-11-19 10:55:26.013677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.722 qpair failed and we were unable to recover it. 
00:28:18.722 [2024-11-19 10:55:26.013972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.722 [2024-11-19 10:55:26.014005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.722 qpair failed and we were unable to recover it. 00:28:18.722 [2024-11-19 10:55:26.014281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.722 [2024-11-19 10:55:26.014313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.722 qpair failed and we were unable to recover it. 00:28:18.722 [2024-11-19 10:55:26.014521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.722 [2024-11-19 10:55:26.014553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.722 qpair failed and we were unable to recover it. 00:28:18.722 [2024-11-19 10:55:26.014874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.722 [2024-11-19 10:55:26.014906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.722 qpair failed and we were unable to recover it. 00:28:18.722 [2024-11-19 10:55:26.015208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.722 [2024-11-19 10:55:26.015241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.722 qpair failed and we were unable to recover it. 00:28:18.722 [2024-11-19 10:55:26.015536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.722 [2024-11-19 10:55:26.015568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.722 qpair failed and we were unable to recover it. 00:28:18.722 [2024-11-19 10:55:26.015841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.722 [2024-11-19 10:55:26.015873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.722 qpair failed and we were unable to recover it. 00:28:18.722 [2024-11-19 10:55:26.016064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.722 [2024-11-19 10:55:26.016097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.722 qpair failed and we were unable to recover it. 00:28:18.722 [2024-11-19 10:55:26.016299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.722 [2024-11-19 10:55:26.016332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.722 qpair failed and we were unable to recover it. 00:28:18.722 [2024-11-19 10:55:26.016611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.722 [2024-11-19 10:55:26.016643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.722 qpair failed and we were unable to recover it. 
00:28:18.722 [2024-11-19 10:55:26.016928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.722 [2024-11-19 10:55:26.016970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.722 qpair failed and we were unable to recover it. 00:28:18.722 [2024-11-19 10:55:26.017242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.722 [2024-11-19 10:55:26.017274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.722 qpair failed and we were unable to recover it. 00:28:18.722 [2024-11-19 10:55:26.017559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.722 [2024-11-19 10:55:26.017592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.722 qpair failed and we were unable to recover it. 00:28:18.722 [2024-11-19 10:55:26.017877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.722 [2024-11-19 10:55:26.017909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.722 qpair failed and we were unable to recover it. 00:28:18.722 [2024-11-19 10:55:26.018192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.722 [2024-11-19 10:55:26.018225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.723 qpair failed and we were unable to recover it. 00:28:18.723 [2024-11-19 10:55:26.018453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.723 [2024-11-19 10:55:26.018485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.723 qpair failed and we were unable to recover it. 00:28:18.723 [2024-11-19 10:55:26.018710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.723 [2024-11-19 10:55:26.018741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.723 qpair failed and we were unable to recover it. 00:28:18.723 [2024-11-19 10:55:26.019012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.723 [2024-11-19 10:55:26.019046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.723 qpair failed and we were unable to recover it. 00:28:18.723 [2024-11-19 10:55:26.019368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.723 [2024-11-19 10:55:26.019401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.723 qpair failed and we were unable to recover it. 00:28:18.723 [2024-11-19 10:55:26.019606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.723 [2024-11-19 10:55:26.019637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.723 qpair failed and we were unable to recover it. 
00:28:18.723 [2024-11-19 10:55:26.019856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.723 [2024-11-19 10:55:26.019888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.723 qpair failed and we were unable to recover it. 00:28:18.723 [2024-11-19 10:55:26.020099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.723 [2024-11-19 10:55:26.020133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.723 qpair failed and we were unable to recover it. 00:28:18.723 [2024-11-19 10:55:26.020401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.723 [2024-11-19 10:55:26.020433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.723 qpair failed and we were unable to recover it. 00:28:18.723 [2024-11-19 10:55:26.020734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.723 [2024-11-19 10:55:26.020772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.723 qpair failed and we were unable to recover it. 00:28:18.723 [2024-11-19 10:55:26.021034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.723 [2024-11-19 10:55:26.021068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.723 qpair failed and we were unable to recover it. 00:28:18.723 [2024-11-19 10:55:26.021286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.723 [2024-11-19 10:55:26.021318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.723 qpair failed and we were unable to recover it. 00:28:18.723 [2024-11-19 10:55:26.021546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.723 [2024-11-19 10:55:26.021578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.723 qpair failed and we were unable to recover it. 00:28:18.723 [2024-11-19 10:55:26.021757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.723 [2024-11-19 10:55:26.021788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.723 qpair failed and we were unable to recover it. 00:28:18.723 [2024-11-19 10:55:26.022066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.723 [2024-11-19 10:55:26.022100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.723 qpair failed and we were unable to recover it. 00:28:18.723 [2024-11-19 10:55:26.022374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.723 [2024-11-19 10:55:26.022405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.723 qpair failed and we were unable to recover it. 
00:28:18.723 [2024-11-19 10:55:26.022695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.723 [2024-11-19 10:55:26.022726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.723 qpair failed and we were unable to recover it. 00:28:18.723 [2024-11-19 10:55:26.022926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.723 [2024-11-19 10:55:26.022966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.723 qpair failed and we were unable to recover it. 00:28:18.723 [2024-11-19 10:55:26.023246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.723 [2024-11-19 10:55:26.023278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.723 qpair failed and we were unable to recover it. 00:28:18.723 [2024-11-19 10:55:26.023473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.723 [2024-11-19 10:55:26.023504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.723 qpair failed and we were unable to recover it. 00:28:18.723 [2024-11-19 10:55:26.023765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.723 [2024-11-19 10:55:26.023797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.723 qpair failed and we were unable to recover it. 00:28:18.723 [2024-11-19 10:55:26.024100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.723 [2024-11-19 10:55:26.024134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.723 qpair failed and we were unable to recover it. 00:28:18.723 [2024-11-19 10:55:26.024402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.723 [2024-11-19 10:55:26.024433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.723 qpair failed and we were unable to recover it. 00:28:18.723 [2024-11-19 10:55:26.024717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.723 [2024-11-19 10:55:26.024750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.723 qpair failed and we were unable to recover it. 00:28:18.723 [2024-11-19 10:55:26.025034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.723 [2024-11-19 10:55:26.025067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.723 qpair failed and we were unable to recover it. 00:28:18.723 [2024-11-19 10:55:26.025209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.723 [2024-11-19 10:55:26.025242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.723 qpair failed and we were unable to recover it. 
00:28:18.723 [2024-11-19 10:55:26.025443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.723 [2024-11-19 10:55:26.025475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.723 qpair failed and we were unable to recover it. 00:28:18.723 [2024-11-19 10:55:26.025749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.723 [2024-11-19 10:55:26.025780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.723 qpair failed and we were unable to recover it. 00:28:18.723 [2024-11-19 10:55:26.025901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.723 [2024-11-19 10:55:26.025932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.723 qpair failed and we were unable to recover it. 00:28:18.723 [2024-11-19 10:55:26.026099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.723 [2024-11-19 10:55:26.026132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.723 qpair failed and we were unable to recover it. 00:28:18.723 [2024-11-19 10:55:26.026383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.723 [2024-11-19 10:55:26.026414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.723 qpair failed and we were unable to recover it. 00:28:18.723 [2024-11-19 10:55:26.026717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.723 [2024-11-19 10:55:26.026749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.723 qpair failed and we were unable to recover it. 00:28:18.723 [2024-11-19 10:55:26.027023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.723 [2024-11-19 10:55:26.027057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.723 qpair failed and we were unable to recover it. 00:28:18.723 [2024-11-19 10:55:26.027339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.723 [2024-11-19 10:55:26.027370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.723 qpair failed and we were unable to recover it. 00:28:18.723 [2024-11-19 10:55:26.027621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.723 [2024-11-19 10:55:26.027653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.723 qpair failed and we were unable to recover it. 00:28:18.723 [2024-11-19 10:55:26.027907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.723 [2024-11-19 10:55:26.027938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.723 qpair failed and we were unable to recover it. 
00:28:18.724 [2024-11-19 10:55:26.028200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.724 [2024-11-19 10:55:26.028233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:18.724 qpair failed and we were unable to recover it.
00:28:18.728 [... the same three-entry failure (posix.c:1054:posix_sock_create: connect() failed, errno = 111 -> nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats roughly 200 more times with timestamps from 10:55:26.028 through 10:55:26.086; duplicate entries elided ...]
00:28:18.729 [2024-11-19 10:55:26.086493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.729 [2024-11-19 10:55:26.086525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.729 qpair failed and we were unable to recover it. 00:28:18.729 [2024-11-19 10:55:26.086714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.729 [2024-11-19 10:55:26.086745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.729 qpair failed and we were unable to recover it. 00:28:18.729 [2024-11-19 10:55:26.087025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.729 [2024-11-19 10:55:26.087060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.729 qpair failed and we were unable to recover it. 00:28:18.729 [2024-11-19 10:55:26.087254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.729 [2024-11-19 10:55:26.087286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.729 qpair failed and we were unable to recover it. 00:28:18.729 [2024-11-19 10:55:26.087559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.729 [2024-11-19 10:55:26.087591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.729 qpair failed and we were unable to recover it. 00:28:18.729 [2024-11-19 10:55:26.087795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.729 [2024-11-19 10:55:26.087827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.729 qpair failed and we were unable to recover it. 00:28:18.729 [2024-11-19 10:55:26.088026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.729 [2024-11-19 10:55:26.088060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.729 qpair failed and we were unable to recover it. 00:28:18.729 [2024-11-19 10:55:26.088336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.729 [2024-11-19 10:55:26.088367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.729 qpair failed and we were unable to recover it. 00:28:18.729 [2024-11-19 10:55:26.088607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.729 [2024-11-19 10:55:26.088640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.729 qpair failed and we were unable to recover it. 00:28:18.729 [2024-11-19 10:55:26.088913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.729 [2024-11-19 10:55:26.088945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.729 qpair failed and we were unable to recover it. 
00:28:18.729 [2024-11-19 10:55:26.089099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.729 [2024-11-19 10:55:26.089133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.729 qpair failed and we were unable to recover it. 00:28:18.729 [2024-11-19 10:55:26.089354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.729 [2024-11-19 10:55:26.089387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.729 qpair failed and we were unable to recover it. 00:28:18.729 [2024-11-19 10:55:26.089660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.729 [2024-11-19 10:55:26.089691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.729 qpair failed and we were unable to recover it. 00:28:18.729 [2024-11-19 10:55:26.089968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.730 [2024-11-19 10:55:26.090000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.730 qpair failed and we were unable to recover it. 00:28:18.730 [2024-11-19 10:55:26.090283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.730 [2024-11-19 10:55:26.090315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.730 qpair failed and we were unable to recover it. 00:28:18.730 [2024-11-19 10:55:26.090597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.730 [2024-11-19 10:55:26.090629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.730 qpair failed and we were unable to recover it. 00:28:18.730 [2024-11-19 10:55:26.090884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.730 [2024-11-19 10:55:26.090916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.730 qpair failed and we were unable to recover it. 00:28:18.730 [2024-11-19 10:55:26.091199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.730 [2024-11-19 10:55:26.091233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.730 qpair failed and we were unable to recover it. 00:28:18.730 [2024-11-19 10:55:26.091423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.730 [2024-11-19 10:55:26.091454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.730 qpair failed and we were unable to recover it. 00:28:18.730 [2024-11-19 10:55:26.091703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.730 [2024-11-19 10:55:26.091736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.730 qpair failed and we were unable to recover it. 
00:28:18.730 [2024-11-19 10:55:26.091989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.730 [2024-11-19 10:55:26.092023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.730 qpair failed and we were unable to recover it. 00:28:18.730 [2024-11-19 10:55:26.092308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.730 [2024-11-19 10:55:26.092341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.730 qpair failed and we were unable to recover it. 00:28:18.730 [2024-11-19 10:55:26.092619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.730 [2024-11-19 10:55:26.092651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.730 qpair failed and we were unable to recover it. 00:28:18.730 [2024-11-19 10:55:26.092918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.730 [2024-11-19 10:55:26.092966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.730 qpair failed and we were unable to recover it. 00:28:18.730 [2024-11-19 10:55:26.093225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.730 [2024-11-19 10:55:26.093259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.730 qpair failed and we were unable to recover it. 00:28:18.730 [2024-11-19 10:55:26.093511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.730 [2024-11-19 10:55:26.093544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.730 qpair failed and we were unable to recover it. 00:28:18.730 [2024-11-19 10:55:26.093794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.730 [2024-11-19 10:55:26.093826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.730 qpair failed and we were unable to recover it. 00:28:18.730 [2024-11-19 10:55:26.094127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.730 [2024-11-19 10:55:26.094161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.730 qpair failed and we were unable to recover it. 00:28:18.730 [2024-11-19 10:55:26.094453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.730 [2024-11-19 10:55:26.094486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.730 qpair failed and we were unable to recover it. 00:28:18.730 [2024-11-19 10:55:26.094712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.730 [2024-11-19 10:55:26.094743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.730 qpair failed and we were unable to recover it. 
00:28:18.730 [2024-11-19 10:55:26.094970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.730 [2024-11-19 10:55:26.095004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.730 qpair failed and we were unable to recover it. 00:28:18.730 [2024-11-19 10:55:26.095256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.730 [2024-11-19 10:55:26.095288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.730 qpair failed and we were unable to recover it. 00:28:18.730 [2024-11-19 10:55:26.095542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.730 [2024-11-19 10:55:26.095574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.730 qpair failed and we were unable to recover it. 00:28:18.730 [2024-11-19 10:55:26.095769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.730 [2024-11-19 10:55:26.095802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.730 qpair failed and we were unable to recover it. 00:28:18.730 [2024-11-19 10:55:26.096079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.730 [2024-11-19 10:55:26.096113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.730 qpair failed and we were unable to recover it. 00:28:18.730 [2024-11-19 10:55:26.096309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.730 [2024-11-19 10:55:26.096342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.730 qpair failed and we were unable to recover it. 00:28:18.730 [2024-11-19 10:55:26.096597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.730 [2024-11-19 10:55:26.096629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.730 qpair failed and we were unable to recover it. 00:28:18.730 [2024-11-19 10:55:26.096879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.730 [2024-11-19 10:55:26.096912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.730 qpair failed and we were unable to recover it. 00:28:18.730 [2024-11-19 10:55:26.097194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.730 [2024-11-19 10:55:26.097229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.730 qpair failed and we were unable to recover it. 00:28:18.730 [2024-11-19 10:55:26.097480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.730 [2024-11-19 10:55:26.097512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.730 qpair failed and we were unable to recover it. 
00:28:18.730 [2024-11-19 10:55:26.097774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.730 [2024-11-19 10:55:26.097807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.730 qpair failed and we were unable to recover it. 00:28:18.730 [2024-11-19 10:55:26.098114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.730 [2024-11-19 10:55:26.098147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.730 qpair failed and we were unable to recover it. 00:28:18.730 [2024-11-19 10:55:26.098344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.730 [2024-11-19 10:55:26.098377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.730 qpair failed and we were unable to recover it. 00:28:18.730 [2024-11-19 10:55:26.098625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.730 [2024-11-19 10:55:26.098657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.730 qpair failed and we were unable to recover it. 00:28:18.730 [2024-11-19 10:55:26.098964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.730 [2024-11-19 10:55:26.098998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.730 qpair failed and we were unable to recover it. 00:28:18.730 [2024-11-19 10:55:26.099247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.730 [2024-11-19 10:55:26.099280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.730 qpair failed and we were unable to recover it. 00:28:18.730 [2024-11-19 10:55:26.099587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.730 [2024-11-19 10:55:26.099619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.730 qpair failed and we were unable to recover it. 00:28:18.730 [2024-11-19 10:55:26.099823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.730 [2024-11-19 10:55:26.099856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.730 qpair failed and we were unable to recover it. 00:28:18.730 [2024-11-19 10:55:26.100138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.730 [2024-11-19 10:55:26.100178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.730 qpair failed and we were unable to recover it. 00:28:18.730 [2024-11-19 10:55:26.100385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.730 [2024-11-19 10:55:26.100418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.730 qpair failed and we were unable to recover it. 
00:28:18.730 [2024-11-19 10:55:26.100639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.730 [2024-11-19 10:55:26.100670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.730 qpair failed and we were unable to recover it. 00:28:18.730 [2024-11-19 10:55:26.100943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.730 [2024-11-19 10:55:26.101004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.731 qpair failed and we were unable to recover it. 00:28:18.731 [2024-11-19 10:55:26.101203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.731 [2024-11-19 10:55:26.101236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.731 qpair failed and we were unable to recover it. 00:28:18.731 [2024-11-19 10:55:26.101359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.731 [2024-11-19 10:55:26.101391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.731 qpair failed and we were unable to recover it. 00:28:18.731 [2024-11-19 10:55:26.101641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.731 [2024-11-19 10:55:26.101672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.731 qpair failed and we were unable to recover it. 00:28:18.731 [2024-11-19 10:55:26.101867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.731 [2024-11-19 10:55:26.101900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.731 qpair failed and we were unable to recover it. 00:28:18.731 [2024-11-19 10:55:26.102185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.731 [2024-11-19 10:55:26.102218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.731 qpair failed and we were unable to recover it. 00:28:18.731 [2024-11-19 10:55:26.102519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.731 [2024-11-19 10:55:26.102552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.731 qpair failed and we were unable to recover it. 00:28:18.731 [2024-11-19 10:55:26.102774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.731 [2024-11-19 10:55:26.102806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.731 qpair failed and we were unable to recover it. 00:28:18.731 [2024-11-19 10:55:26.103076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.731 [2024-11-19 10:55:26.103110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.731 qpair failed and we were unable to recover it. 
00:28:18.731 [2024-11-19 10:55:26.103292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.731 [2024-11-19 10:55:26.103325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.731 qpair failed and we were unable to recover it. 00:28:18.731 [2024-11-19 10:55:26.103600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.731 [2024-11-19 10:55:26.103632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.731 qpair failed and we were unable to recover it. 00:28:18.731 [2024-11-19 10:55:26.103923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.731 [2024-11-19 10:55:26.103966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.731 qpair failed and we were unable to recover it. 00:28:18.731 [2024-11-19 10:55:26.104184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.731 [2024-11-19 10:55:26.104216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.731 qpair failed and we were unable to recover it. 00:28:18.731 [2024-11-19 10:55:26.104524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.731 [2024-11-19 10:55:26.104557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.731 qpair failed and we were unable to recover it. 00:28:18.731 [2024-11-19 10:55:26.104813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.731 [2024-11-19 10:55:26.104845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.731 qpair failed and we were unable to recover it. 00:28:18.731 [2024-11-19 10:55:26.105147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.731 [2024-11-19 10:55:26.105182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.731 qpair failed and we were unable to recover it. 00:28:18.731 [2024-11-19 10:55:26.105385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.731 [2024-11-19 10:55:26.105418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.731 qpair failed and we were unable to recover it. 00:28:18.731 [2024-11-19 10:55:26.105611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.731 [2024-11-19 10:55:26.105642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.731 qpair failed and we were unable to recover it. 00:28:18.731 [2024-11-19 10:55:26.105917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.731 [2024-11-19 10:55:26.105957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.731 qpair failed and we were unable to recover it. 
00:28:18.731 [2024-11-19 10:55:26.106261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.731 [2024-11-19 10:55:26.106293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.731 qpair failed and we were unable to recover it. 00:28:18.731 [2024-11-19 10:55:26.106551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.731 [2024-11-19 10:55:26.106584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.731 qpair failed and we were unable to recover it. 00:28:18.731 [2024-11-19 10:55:26.106789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.731 [2024-11-19 10:55:26.106821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.731 qpair failed and we were unable to recover it. 00:28:18.731 [2024-11-19 10:55:26.107019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.731 [2024-11-19 10:55:26.107053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.731 qpair failed and we were unable to recover it. 00:28:18.731 [2024-11-19 10:55:26.107329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.731 [2024-11-19 10:55:26.107361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.731 qpair failed and we were unable to recover it. 00:28:18.731 [2024-11-19 10:55:26.107614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.731 [2024-11-19 10:55:26.107653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.731 qpair failed and we were unable to recover it. 00:28:18.731 [2024-11-19 10:55:26.107907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.731 [2024-11-19 10:55:26.107939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.731 qpair failed and we were unable to recover it. 00:28:18.731 [2024-11-19 10:55:26.108231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.731 [2024-11-19 10:55:26.108264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.731 qpair failed and we were unable to recover it. 00:28:18.731 [2024-11-19 10:55:26.108539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.731 [2024-11-19 10:55:26.108570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.731 qpair failed and we were unable to recover it. 00:28:18.731 [2024-11-19 10:55:26.108761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.731 [2024-11-19 10:55:26.108794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.731 qpair failed and we were unable to recover it. 
00:28:18.731 [2024-11-19 10:55:26.109042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.731 [2024-11-19 10:55:26.109076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.731 qpair failed and we were unable to recover it. 00:28:18.731 [2024-11-19 10:55:26.109281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.731 [2024-11-19 10:55:26.109314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.731 qpair failed and we were unable to recover it. 00:28:18.731 [2024-11-19 10:55:26.109562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.731 [2024-11-19 10:55:26.109594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.731 qpair failed and we were unable to recover it. 00:28:18.731 [2024-11-19 10:55:26.109866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.731 [2024-11-19 10:55:26.109899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.731 qpair failed and we were unable to recover it. 00:28:18.731 [2024-11-19 10:55:26.110185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.731 [2024-11-19 10:55:26.110220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.731 qpair failed and we were unable to recover it. 00:28:18.731 [2024-11-19 10:55:26.110516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.731 [2024-11-19 10:55:26.110548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.731 qpair failed and we were unable to recover it. 00:28:18.731 [2024-11-19 10:55:26.110822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.731 [2024-11-19 10:55:26.110855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.731 qpair failed and we were unable to recover it. 00:28:18.731 [2024-11-19 10:55:26.111148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.731 [2024-11-19 10:55:26.111181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.731 qpair failed and we were unable to recover it. 00:28:18.731 [2024-11-19 10:55:26.111454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.731 [2024-11-19 10:55:26.111486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.732 qpair failed and we were unable to recover it. 00:28:18.732 [2024-11-19 10:55:26.111699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.732 [2024-11-19 10:55:26.111732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.732 qpair failed and we were unable to recover it. 
00:28:18.732 [2024-11-19 10:55:26.111977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.732 [2024-11-19 10:55:26.112011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.732 qpair failed and we were unable to recover it. 00:28:18.732 [2024-11-19 10:55:26.112290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.732 [2024-11-19 10:55:26.112323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.732 qpair failed and we were unable to recover it. 00:28:18.732 [2024-11-19 10:55:26.112454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.732 [2024-11-19 10:55:26.112486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.732 qpair failed and we were unable to recover it. 00:28:18.732 [2024-11-19 10:55:26.112760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.732 [2024-11-19 10:55:26.112793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.732 qpair failed and we were unable to recover it. 00:28:18.732 [2024-11-19 10:55:26.113042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.732 [2024-11-19 10:55:26.113076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.732 qpair failed and we were unable to recover it. 00:28:18.732 [2024-11-19 10:55:26.113280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.732 [2024-11-19 10:55:26.113313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.732 qpair failed and we were unable to recover it. 00:28:18.732 [2024-11-19 10:55:26.113497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.732 [2024-11-19 10:55:26.113529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.732 qpair failed and we were unable to recover it. 00:28:18.732 [2024-11-19 10:55:26.113709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.732 [2024-11-19 10:55:26.113741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.732 qpair failed and we were unable to recover it. 00:28:18.732 [2024-11-19 10:55:26.114021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.732 [2024-11-19 10:55:26.114056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.732 qpair failed and we were unable to recover it. 00:28:18.732 [2024-11-19 10:55:26.114267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.732 [2024-11-19 10:55:26.114300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.732 qpair failed and we were unable to recover it. 
00:28:18.732 [2024-11-19 10:55:26.114520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.732 [2024-11-19 10:55:26.114553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.732 qpair failed and we were unable to recover it. 00:28:18.732 [2024-11-19 10:55:26.114854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.732 [2024-11-19 10:55:26.114887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.732 qpair failed and we were unable to recover it. 00:28:18.732 [2024-11-19 10:55:26.115087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.732 [2024-11-19 10:55:26.115122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.732 qpair failed and we were unable to recover it. 00:28:18.732 [2024-11-19 10:55:26.115381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.732 [2024-11-19 10:55:26.115414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.732 qpair failed and we were unable to recover it. 00:28:18.732 [2024-11-19 10:55:26.115706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.732 [2024-11-19 10:55:26.115738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.732 qpair failed and we were unable to recover it. 00:28:18.732 [2024-11-19 10:55:26.116035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.732 [2024-11-19 10:55:26.116070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.732 qpair failed and we were unable to recover it. 00:28:18.732 [2024-11-19 10:55:26.116298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.732 [2024-11-19 10:55:26.116331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.732 qpair failed and we were unable to recover it. 00:28:18.732 [2024-11-19 10:55:26.116524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.732 [2024-11-19 10:55:26.116556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.732 qpair failed and we were unable to recover it. 00:28:18.732 [2024-11-19 10:55:26.116829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.732 [2024-11-19 10:55:26.116862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.732 qpair failed and we were unable to recover it. 00:28:18.732 [2024-11-19 10:55:26.117066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.732 [2024-11-19 10:55:26.117101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.732 qpair failed and we were unable to recover it. 
00:28:18.732 [2024-11-19 10:55:26.117283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.732 [2024-11-19 10:55:26.117315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.732 qpair failed and we were unable to recover it. 00:28:18.732 [2024-11-19 10:55:26.117537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.732 [2024-11-19 10:55:26.117569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.732 qpair failed and we were unable to recover it. 00:28:18.732 [2024-11-19 10:55:26.117840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.732 [2024-11-19 10:55:26.117873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.732 qpair failed and we were unable to recover it. 00:28:18.732 [2024-11-19 10:55:26.118141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.732 [2024-11-19 10:55:26.118174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.732 qpair failed and we were unable to recover it. 00:28:18.732 [2024-11-19 10:55:26.118456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.732 [2024-11-19 10:55:26.118490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.732 qpair failed and we were unable to recover it. 00:28:18.732 [2024-11-19 10:55:26.118740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.732 [2024-11-19 10:55:26.118773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.732 qpair failed and we were unable to recover it. 00:28:18.732 [2024-11-19 10:55:26.119031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.732 [2024-11-19 10:55:26.119066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.732 qpair failed and we were unable to recover it. 00:28:18.732 [2024-11-19 10:55:26.119319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.732 [2024-11-19 10:55:26.119351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.732 qpair failed and we were unable to recover it. 00:28:18.732 [2024-11-19 10:55:26.119552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.732 [2024-11-19 10:55:26.119585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.732 qpair failed and we were unable to recover it. 00:28:18.732 [2024-11-19 10:55:26.119879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.732 [2024-11-19 10:55:26.119911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.732 qpair failed and we were unable to recover it. 
00:28:18.733 [2024-11-19 10:55:26.120136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.733 [2024-11-19 10:55:26.120170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.733 qpair failed and we were unable to recover it. 00:28:18.733 [2024-11-19 10:55:26.120450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.733 [2024-11-19 10:55:26.120484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.733 qpair failed and we were unable to recover it. 00:28:18.733 [2024-11-19 10:55:26.120698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.733 [2024-11-19 10:55:26.120729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.733 qpair failed and we were unable to recover it. 00:28:18.733 [2024-11-19 10:55:26.120962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.733 [2024-11-19 10:55:26.120996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.733 qpair failed and we were unable to recover it. 00:28:18.733 [2024-11-19 10:55:26.121183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.733 [2024-11-19 10:55:26.121214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.733 qpair failed and we were unable to recover it. 00:28:18.733 [2024-11-19 10:55:26.121513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.733 [2024-11-19 10:55:26.121546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.733 qpair failed and we were unable to recover it. 00:28:18.733 [2024-11-19 10:55:26.121796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.733 [2024-11-19 10:55:26.121828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.733 qpair failed and we were unable to recover it. 00:28:18.733 [2024-11-19 10:55:26.122081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.733 [2024-11-19 10:55:26.122115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.733 qpair failed and we were unable to recover it. 00:28:18.733 [2024-11-19 10:55:26.122311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.733 [2024-11-19 10:55:26.122343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.733 qpair failed and we were unable to recover it. 00:28:18.733 [2024-11-19 10:55:26.122554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.733 [2024-11-19 10:55:26.122587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:18.733 qpair failed and we were unable to recover it. 
00:28:18.733 [2024-11-19 10:55:26.122867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:18.733 [2024-11-19 10:55:26.122899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:18.733 qpair failed and we were unable to recover it.
00:28:19.020 [the same three-line error sequence — posix_sock_create connect() failed with errno = 111, followed by nvme_tcp_qpair_connect_sock reporting a sock connection error for tqpair=0x22c8ba0 (addr=10.0.0.2, port=4420), followed by "qpair failed and we were unable to recover it." — repeated continuously from 10:55:26.123107 through 10:55:26.180590; every attempt failed identically]
00:28:19.020 [2024-11-19 10:55:26.180869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.020 [2024-11-19 10:55:26.180901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.020 qpair failed and we were unable to recover it. 00:28:19.020 [2024-11-19 10:55:26.181189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.020 [2024-11-19 10:55:26.181223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.020 qpair failed and we were unable to recover it. 00:28:19.020 [2024-11-19 10:55:26.181501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.020 [2024-11-19 10:55:26.181534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.020 qpair failed and we were unable to recover it. 00:28:19.020 [2024-11-19 10:55:26.181822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.020 [2024-11-19 10:55:26.181854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.020 qpair failed and we were unable to recover it. 00:28:19.020 [2024-11-19 10:55:26.182068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.020 [2024-11-19 10:55:26.182102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.020 qpair failed and we were unable to recover it. 00:28:19.020 [2024-11-19 10:55:26.182376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.020 [2024-11-19 10:55:26.182409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.020 qpair failed and we were unable to recover it. 00:28:19.020 [2024-11-19 10:55:26.182699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.020 [2024-11-19 10:55:26.182732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.020 qpair failed and we were unable to recover it. 00:28:19.020 [2024-11-19 10:55:26.182963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.020 [2024-11-19 10:55:26.182997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.020 qpair failed and we were unable to recover it. 00:28:19.020 [2024-11-19 10:55:26.183270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.020 [2024-11-19 10:55:26.183303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.020 qpair failed and we were unable to recover it. 00:28:19.020 [2024-11-19 10:55:26.183559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.020 [2024-11-19 10:55:26.183591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.020 qpair failed and we were unable to recover it. 
00:28:19.020 [2024-11-19 10:55:26.183840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.020 [2024-11-19 10:55:26.183873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.020 qpair failed and we were unable to recover it. 00:28:19.020 [2024-11-19 10:55:26.184095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.020 [2024-11-19 10:55:26.184129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.020 qpair failed and we were unable to recover it. 00:28:19.020 [2024-11-19 10:55:26.184395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.020 [2024-11-19 10:55:26.184428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.020 qpair failed and we were unable to recover it. 00:28:19.020 [2024-11-19 10:55:26.184706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.020 [2024-11-19 10:55:26.184737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.020 qpair failed and we were unable to recover it. 00:28:19.020 [2024-11-19 10:55:26.185027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.021 [2024-11-19 10:55:26.185062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.021 qpair failed and we were unable to recover it. 00:28:19.021 [2024-11-19 10:55:26.185304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.021 [2024-11-19 10:55:26.185336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.021 qpair failed and we were unable to recover it. 00:28:19.021 [2024-11-19 10:55:26.185641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.021 [2024-11-19 10:55:26.185673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.021 qpair failed and we were unable to recover it. 00:28:19.021 [2024-11-19 10:55:26.185889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.021 [2024-11-19 10:55:26.185921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.021 qpair failed and we were unable to recover it. 00:28:19.021 [2024-11-19 10:55:26.186182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.021 [2024-11-19 10:55:26.186216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.021 qpair failed and we were unable to recover it. 00:28:19.021 [2024-11-19 10:55:26.186490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.021 [2024-11-19 10:55:26.186521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.021 qpair failed and we were unable to recover it. 
00:28:19.021 [2024-11-19 10:55:26.186775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.021 [2024-11-19 10:55:26.186814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.021 qpair failed and we were unable to recover it. 00:28:19.021 [2024-11-19 10:55:26.187117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.021 [2024-11-19 10:55:26.187151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.021 qpair failed and we were unable to recover it. 00:28:19.021 [2024-11-19 10:55:26.187435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.021 [2024-11-19 10:55:26.187468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.021 qpair failed and we were unable to recover it. 00:28:19.021 [2024-11-19 10:55:26.187749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.021 [2024-11-19 10:55:26.187781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.021 qpair failed and we were unable to recover it. 00:28:19.021 [2024-11-19 10:55:26.188092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.021 [2024-11-19 10:55:26.188128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.021 qpair failed and we were unable to recover it. 00:28:19.021 [2024-11-19 10:55:26.188379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.021 [2024-11-19 10:55:26.188411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.021 qpair failed and we were unable to recover it. 00:28:19.021 [2024-11-19 10:55:26.188670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.021 [2024-11-19 10:55:26.188703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.021 qpair failed and we were unable to recover it. 00:28:19.021 [2024-11-19 10:55:26.189000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.021 [2024-11-19 10:55:26.189033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.021 qpair failed and we were unable to recover it. 00:28:19.021 [2024-11-19 10:55:26.189333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.021 [2024-11-19 10:55:26.189366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.021 qpair failed and we were unable to recover it. 00:28:19.021 [2024-11-19 10:55:26.189584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.021 [2024-11-19 10:55:26.189616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.021 qpair failed and we were unable to recover it. 
00:28:19.021 [2024-11-19 10:55:26.189895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.021 [2024-11-19 10:55:26.189928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.021 qpair failed and we were unable to recover it. 00:28:19.021 [2024-11-19 10:55:26.190154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.021 [2024-11-19 10:55:26.190187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.021 qpair failed and we were unable to recover it. 00:28:19.021 [2024-11-19 10:55:26.190379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.021 [2024-11-19 10:55:26.190412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.021 qpair failed and we were unable to recover it. 00:28:19.021 [2024-11-19 10:55:26.190688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.021 [2024-11-19 10:55:26.190720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.021 qpair failed and we were unable to recover it. 00:28:19.021 [2024-11-19 10:55:26.190922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.021 [2024-11-19 10:55:26.190963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.021 qpair failed and we were unable to recover it. 00:28:19.021 [2024-11-19 10:55:26.191171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.021 [2024-11-19 10:55:26.191203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.021 qpair failed and we were unable to recover it. 00:28:19.021 [2024-11-19 10:55:26.191481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.021 [2024-11-19 10:55:26.191514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.021 qpair failed and we were unable to recover it. 00:28:19.021 [2024-11-19 10:55:26.191711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.021 [2024-11-19 10:55:26.191744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.021 qpair failed and we were unable to recover it. 00:28:19.021 [2024-11-19 10:55:26.191922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.021 [2024-11-19 10:55:26.191965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.021 qpair failed and we were unable to recover it. 00:28:19.021 [2024-11-19 10:55:26.192238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.021 [2024-11-19 10:55:26.192270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.021 qpair failed and we were unable to recover it. 
00:28:19.021 [2024-11-19 10:55:26.192563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.021 [2024-11-19 10:55:26.192596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.021 qpair failed and we were unable to recover it. 00:28:19.021 [2024-11-19 10:55:26.192872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.021 [2024-11-19 10:55:26.192904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.021 qpair failed and we were unable to recover it. 00:28:19.021 [2024-11-19 10:55:26.193167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.021 [2024-11-19 10:55:26.193201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.021 qpair failed and we were unable to recover it. 00:28:19.021 [2024-11-19 10:55:26.193406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.021 [2024-11-19 10:55:26.193440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.021 qpair failed and we were unable to recover it. 00:28:19.021 [2024-11-19 10:55:26.193662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.021 [2024-11-19 10:55:26.193695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.021 qpair failed and we were unable to recover it. 00:28:19.021 [2024-11-19 10:55:26.193890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.021 [2024-11-19 10:55:26.193923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.021 qpair failed and we were unable to recover it. 00:28:19.021 [2024-11-19 10:55:26.194072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.022 [2024-11-19 10:55:26.194106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.022 qpair failed and we were unable to recover it. 00:28:19.022 [2024-11-19 10:55:26.194387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.022 [2024-11-19 10:55:26.194425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.022 qpair failed and we were unable to recover it. 00:28:19.022 [2024-11-19 10:55:26.194745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.022 [2024-11-19 10:55:26.194779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.022 qpair failed and we were unable to recover it. 00:28:19.022 [2024-11-19 10:55:26.195075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.022 [2024-11-19 10:55:26.195109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.022 qpair failed and we were unable to recover it. 
00:28:19.022 [2024-11-19 10:55:26.195306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.022 [2024-11-19 10:55:26.195339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.022 qpair failed and we were unable to recover it. 00:28:19.022 [2024-11-19 10:55:26.195544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.022 [2024-11-19 10:55:26.195576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.022 qpair failed and we were unable to recover it. 00:28:19.022 [2024-11-19 10:55:26.195827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.022 [2024-11-19 10:55:26.195859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.022 qpair failed and we were unable to recover it. 00:28:19.022 [2024-11-19 10:55:26.196116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.022 [2024-11-19 10:55:26.196150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.022 qpair failed and we were unable to recover it. 00:28:19.022 [2024-11-19 10:55:26.196449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.022 [2024-11-19 10:55:26.196480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.022 qpair failed and we were unable to recover it. 00:28:19.022 [2024-11-19 10:55:26.196768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.022 [2024-11-19 10:55:26.196801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.022 qpair failed and we were unable to recover it. 00:28:19.022 [2024-11-19 10:55:26.196936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.022 [2024-11-19 10:55:26.196983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.022 qpair failed and we were unable to recover it. 00:28:19.022 [2024-11-19 10:55:26.197233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.022 [2024-11-19 10:55:26.197266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.022 qpair failed and we were unable to recover it. 00:28:19.022 [2024-11-19 10:55:26.197472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.022 [2024-11-19 10:55:26.197503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.022 qpair failed and we were unable to recover it. 00:28:19.022 [2024-11-19 10:55:26.197781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.022 [2024-11-19 10:55:26.197814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.022 qpair failed and we were unable to recover it. 
00:28:19.022 [2024-11-19 10:55:26.198013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.022 [2024-11-19 10:55:26.198046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.022 qpair failed and we were unable to recover it. 00:28:19.022 [2024-11-19 10:55:26.198326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.022 [2024-11-19 10:55:26.198358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.022 qpair failed and we were unable to recover it. 00:28:19.022 [2024-11-19 10:55:26.198575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.022 [2024-11-19 10:55:26.198606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.022 qpair failed and we were unable to recover it. 00:28:19.022 [2024-11-19 10:55:26.198795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.022 [2024-11-19 10:55:26.198828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.022 qpair failed and we were unable to recover it. 00:28:19.022 [2024-11-19 10:55:26.199110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.022 [2024-11-19 10:55:26.199143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.022 qpair failed and we were unable to recover it. 00:28:19.022 [2024-11-19 10:55:26.199336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.022 [2024-11-19 10:55:26.199368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.022 qpair failed and we were unable to recover it. 00:28:19.022 [2024-11-19 10:55:26.199554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.022 [2024-11-19 10:55:26.199586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.022 qpair failed and we were unable to recover it. 00:28:19.022 [2024-11-19 10:55:26.199869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.022 [2024-11-19 10:55:26.199901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.022 qpair failed and we were unable to recover it. 00:28:19.022 [2024-11-19 10:55:26.200058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.022 [2024-11-19 10:55:26.200091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.022 qpair failed and we were unable to recover it. 00:28:19.022 [2024-11-19 10:55:26.200373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.022 [2024-11-19 10:55:26.200406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.022 qpair failed and we were unable to recover it. 
00:28:19.022 [2024-11-19 10:55:26.200678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.022 [2024-11-19 10:55:26.200709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.022 qpair failed and we were unable to recover it. 00:28:19.022 [2024-11-19 10:55:26.200929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.022 [2024-11-19 10:55:26.200980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.022 qpair failed and we were unable to recover it. 00:28:19.022 [2024-11-19 10:55:26.201199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.022 [2024-11-19 10:55:26.201231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.022 qpair failed and we were unable to recover it. 00:28:19.022 [2024-11-19 10:55:26.201468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.022 [2024-11-19 10:55:26.201502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.022 qpair failed and we were unable to recover it. 00:28:19.022 [2024-11-19 10:55:26.201720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.022 [2024-11-19 10:55:26.201763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.022 qpair failed and we were unable to recover it. 00:28:19.022 [2024-11-19 10:55:26.202029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.022 [2024-11-19 10:55:26.202064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.022 qpair failed and we were unable to recover it. 00:28:19.022 [2024-11-19 10:55:26.202377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.022 [2024-11-19 10:55:26.202411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.022 qpair failed and we were unable to recover it. 00:28:19.022 [2024-11-19 10:55:26.202625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.022 [2024-11-19 10:55:26.202659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.022 qpair failed and we were unable to recover it. 00:28:19.022 [2024-11-19 10:55:26.202929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.022 [2024-11-19 10:55:26.202975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.022 qpair failed and we were unable to recover it. 00:28:19.022 [2024-11-19 10:55:26.203277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.022 [2024-11-19 10:55:26.203308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.022 qpair failed and we were unable to recover it. 
00:28:19.022 [2024-11-19 10:55:26.203508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.022 [2024-11-19 10:55:26.203540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.022 qpair failed and we were unable to recover it. 00:28:19.022 [2024-11-19 10:55:26.203818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.022 [2024-11-19 10:55:26.203851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.022 qpair failed and we were unable to recover it. 00:28:19.022 [2024-11-19 10:55:26.204049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.022 [2024-11-19 10:55:26.204082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.022 qpair failed and we were unable to recover it. 00:28:19.022 [2024-11-19 10:55:26.204358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.022 [2024-11-19 10:55:26.204392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.022 qpair failed and we were unable to recover it. 00:28:19.022 [2024-11-19 10:55:26.204676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.022 [2024-11-19 10:55:26.204708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.023 qpair failed and we were unable to recover it. 00:28:19.023 [2024-11-19 10:55:26.204921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.023 [2024-11-19 10:55:26.204963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.023 qpair failed and we were unable to recover it. 00:28:19.023 [2024-11-19 10:55:26.205147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.023 [2024-11-19 10:55:26.205179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.023 qpair failed and we were unable to recover it. 00:28:19.023 [2024-11-19 10:55:26.205432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.023 [2024-11-19 10:55:26.205465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.023 qpair failed and we were unable to recover it. 00:28:19.023 [2024-11-19 10:55:26.205687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.023 [2024-11-19 10:55:26.205720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.023 qpair failed and we were unable to recover it. 00:28:19.023 [2024-11-19 10:55:26.205993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.023 [2024-11-19 10:55:26.206050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.023 qpair failed and we were unable to recover it. 
00:28:19.023 [2024-11-19 10:55:26.206317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.023 [2024-11-19 10:55:26.206351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.023 qpair failed and we were unable to recover it. 00:28:19.023 [2024-11-19 10:55:26.206557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.023 [2024-11-19 10:55:26.206591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.023 qpair failed and we were unable to recover it. 00:28:19.023 [2024-11-19 10:55:26.206769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.023 [2024-11-19 10:55:26.206801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.023 qpair failed and we were unable to recover it. 00:28:19.023 [2024-11-19 10:55:26.207004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.023 [2024-11-19 10:55:26.207039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.023 qpair failed and we were unable to recover it. 00:28:19.023 [2024-11-19 10:55:26.207323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.023 [2024-11-19 10:55:26.207354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.023 qpair failed and we were unable to recover it. 00:28:19.023 [2024-11-19 10:55:26.207495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.023 [2024-11-19 10:55:26.207528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.023 qpair failed and we were unable to recover it. 00:28:19.023 [2024-11-19 10:55:26.207802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.023 [2024-11-19 10:55:26.207834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.023 qpair failed and we were unable to recover it. 00:28:19.023 [2024-11-19 10:55:26.208109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.023 [2024-11-19 10:55:26.208143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.023 qpair failed and we were unable to recover it. 00:28:19.023 [2024-11-19 10:55:26.208410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.023 [2024-11-19 10:55:26.208442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.023 qpair failed and we were unable to recover it. 00:28:19.023 [2024-11-19 10:55:26.208724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.023 [2024-11-19 10:55:26.208757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.023 qpair failed and we were unable to recover it. 
00:28:19.023 [2024-11-19 10:55:26.209010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.023 [2024-11-19 10:55:26.209043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.023 qpair failed and we were unable to recover it. 00:28:19.023 [2024-11-19 10:55:26.209303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.023 [2024-11-19 10:55:26.209337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.023 qpair failed and we were unable to recover it. 00:28:19.023 [2024-11-19 10:55:26.209596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.023 [2024-11-19 10:55:26.209630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.023 qpair failed and we were unable to recover it. 00:28:19.023 [2024-11-19 10:55:26.209965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.023 [2024-11-19 10:55:26.209999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.023 qpair failed and we were unable to recover it. 00:28:19.023 [2024-11-19 10:55:26.210275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.023 [2024-11-19 10:55:26.210307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.023 qpair failed and we were unable to recover it. 00:28:19.023 [2024-11-19 10:55:26.210584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.023 [2024-11-19 10:55:26.210618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.023 qpair failed and we were unable to recover it. 00:28:19.023 [2024-11-19 10:55:26.210829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.023 [2024-11-19 10:55:26.210861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.023 qpair failed and we were unable to recover it. 00:28:19.023 [2024-11-19 10:55:26.211190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.023 [2024-11-19 10:55:26.211224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.023 qpair failed and we were unable to recover it. 00:28:19.023 [2024-11-19 10:55:26.211423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.023 [2024-11-19 10:55:26.211455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.023 qpair failed and we were unable to recover it. 00:28:19.023 [2024-11-19 10:55:26.211680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.023 [2024-11-19 10:55:26.211712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.023 qpair failed and we were unable to recover it. 
00:28:19.023 [2024-11-19 10:55:26.211985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.023 [2024-11-19 10:55:26.212019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.023 qpair failed and we were unable to recover it. 00:28:19.023 [2024-11-19 10:55:26.212313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.023 [2024-11-19 10:55:26.212345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.023 qpair failed and we were unable to recover it. 00:28:19.023 [2024-11-19 10:55:26.212573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.023 [2024-11-19 10:55:26.212605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.023 qpair failed and we were unable to recover it. 00:28:19.023 [2024-11-19 10:55:26.212878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.023 [2024-11-19 10:55:26.212911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.023 qpair failed and we were unable to recover it. 00:28:19.023 [2024-11-19 10:55:26.213205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.023 [2024-11-19 10:55:26.213237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.023 qpair failed and we were unable to recover it. 00:28:19.023 [2024-11-19 10:55:26.213465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.023 [2024-11-19 10:55:26.213498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.023 qpair failed and we were unable to recover it. 00:28:19.023 [2024-11-19 10:55:26.213750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.023 [2024-11-19 10:55:26.213782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.023 qpair failed and we were unable to recover it. 00:28:19.023 [2024-11-19 10:55:26.213975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.023 [2024-11-19 10:55:26.214009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.023 qpair failed and we were unable to recover it. 00:28:19.023 [2024-11-19 10:55:26.214226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.023 [2024-11-19 10:55:26.214260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.023 qpair failed and we were unable to recover it. 00:28:19.023 [2024-11-19 10:55:26.214457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.023 [2024-11-19 10:55:26.214490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.023 qpair failed and we were unable to recover it. 
00:28:19.023 [2024-11-19 10:55:26.214767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.023 [2024-11-19 10:55:26.214800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.023 qpair failed and we were unable to recover it. 00:28:19.023 [2024-11-19 10:55:26.214988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.023 [2024-11-19 10:55:26.215024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.023 qpair failed and we were unable to recover it. 00:28:19.023 [2024-11-19 10:55:26.215304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.023 [2024-11-19 10:55:26.215338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.023 qpair failed and we were unable to recover it. 00:28:19.024 [2024-11-19 10:55:26.215619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.024 [2024-11-19 10:55:26.215651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.024 qpair failed and we were unable to recover it. 00:28:19.024 [2024-11-19 10:55:26.215929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.024 [2024-11-19 10:55:26.215976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.024 qpair failed and we were unable to recover it. 00:28:19.024 [2024-11-19 10:55:26.216219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.024 [2024-11-19 10:55:26.216251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.024 qpair failed and we were unable to recover it. 00:28:19.024 [2024-11-19 10:55:26.216537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.024 [2024-11-19 10:55:26.216569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.024 qpair failed and we were unable to recover it. 00:28:19.024 [2024-11-19 10:55:26.216774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.024 [2024-11-19 10:55:26.216807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.024 qpair failed and we were unable to recover it. 00:28:19.024 [2024-11-19 10:55:26.217080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.024 [2024-11-19 10:55:26.217114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.024 qpair failed and we were unable to recover it. 00:28:19.024 [2024-11-19 10:55:26.217374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.024 [2024-11-19 10:55:26.217407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.024 qpair failed and we were unable to recover it. 
00:28:19.024 [2024-11-19 10:55:26.217686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.024 [2024-11-19 10:55:26.217719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.024 qpair failed and we were unable to recover it. 00:28:19.024 [2024-11-19 10:55:26.218003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.024 [2024-11-19 10:55:26.218036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.024 qpair failed and we were unable to recover it. 00:28:19.024 [2024-11-19 10:55:26.218317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.024 [2024-11-19 10:55:26.218349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.024 qpair failed and we were unable to recover it. 00:28:19.024 [2024-11-19 10:55:26.218632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.024 [2024-11-19 10:55:26.218664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.024 qpair failed and we were unable to recover it. 00:28:19.024 [2024-11-19 10:55:26.218962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.024 [2024-11-19 10:55:26.218997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.024 qpair failed and we were unable to recover it. 00:28:19.024 [2024-11-19 10:55:26.219179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.024 [2024-11-19 10:55:26.219211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.024 qpair failed and we were unable to recover it. 00:28:19.024 [2024-11-19 10:55:26.219485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.024 [2024-11-19 10:55:26.219518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.024 qpair failed and we were unable to recover it. 00:28:19.024 [2024-11-19 10:55:26.219713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.024 [2024-11-19 10:55:26.219746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.024 qpair failed and we were unable to recover it. 00:28:19.024 [2024-11-19 10:55:26.220002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.024 [2024-11-19 10:55:26.220036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.024 qpair failed and we were unable to recover it. 00:28:19.024 [2024-11-19 10:55:26.220249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.024 [2024-11-19 10:55:26.220282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.024 qpair failed and we were unable to recover it. 
[... the same three-line failure (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every retry from 10:55:26.220548 through 10:55:26.272288 ...]
00:28:19.029 [2024-11-19 10:55:26.272564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.029 [2024-11-19 10:55:26.272597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.029 qpair failed and we were unable to recover it. 00:28:19.029 [2024-11-19 10:55:26.272791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.029 [2024-11-19 10:55:26.272823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.029 qpair failed and we were unable to recover it. 00:28:19.029 [2024-11-19 10:55:26.273045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.029 [2024-11-19 10:55:26.273086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.029 qpair failed and we were unable to recover it. 00:28:19.029 [2024-11-19 10:55:26.273385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.029 [2024-11-19 10:55:26.273418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.029 qpair failed and we were unable to recover it. 00:28:19.029 [2024-11-19 10:55:26.273603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.029 [2024-11-19 10:55:26.273635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.029 qpair failed and we were unable to recover it. 00:28:19.029 [2024-11-19 10:55:26.273832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.029 [2024-11-19 10:55:26.273866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.029 qpair failed and we were unable to recover it. 00:28:19.029 [2024-11-19 10:55:26.273981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.029 [2024-11-19 10:55:26.274015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.029 qpair failed and we were unable to recover it. 00:28:19.029 [2024-11-19 10:55:26.274209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.029 [2024-11-19 10:55:26.274243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.029 qpair failed and we were unable to recover it. 00:28:19.029 [2024-11-19 10:55:26.274363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.029 [2024-11-19 10:55:26.274394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.029 qpair failed and we were unable to recover it. 00:28:19.029 [2024-11-19 10:55:26.274579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.029 [2024-11-19 10:55:26.274612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.029 qpair failed and we were unable to recover it. 
00:28:19.029 [2024-11-19 10:55:26.274830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.029 [2024-11-19 10:55:26.274861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.029 qpair failed and we were unable to recover it. 00:28:19.029 [2024-11-19 10:55:26.275136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.029 [2024-11-19 10:55:26.275171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.029 qpair failed and we were unable to recover it. 00:28:19.029 [2024-11-19 10:55:26.275443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.029 [2024-11-19 10:55:26.275475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.029 qpair failed and we were unable to recover it. 00:28:19.029 [2024-11-19 10:55:26.275725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.029 [2024-11-19 10:55:26.275758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.029 qpair failed and we were unable to recover it. 00:28:19.029 [2024-11-19 10:55:26.276059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.029 [2024-11-19 10:55:26.276093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.029 qpair failed and we were unable to recover it. 00:28:19.029 [2024-11-19 10:55:26.276312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.029 [2024-11-19 10:55:26.276345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.029 qpair failed and we were unable to recover it. 00:28:19.029 [2024-11-19 10:55:26.276551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.029 [2024-11-19 10:55:26.276584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.029 qpair failed and we were unable to recover it. 00:28:19.029 [2024-11-19 10:55:26.276781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.030 [2024-11-19 10:55:26.276814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.030 qpair failed and we were unable to recover it. 00:28:19.030 [2024-11-19 10:55:26.276997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.030 [2024-11-19 10:55:26.277031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.030 qpair failed and we were unable to recover it. 00:28:19.030 [2024-11-19 10:55:26.277225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.030 [2024-11-19 10:55:26.277257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.030 qpair failed and we were unable to recover it. 
00:28:19.030 [2024-11-19 10:55:26.277387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.030 [2024-11-19 10:55:26.277419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.030 qpair failed and we were unable to recover it. 00:28:19.030 [2024-11-19 10:55:26.277666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.030 [2024-11-19 10:55:26.277698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.030 qpair failed and we were unable to recover it. 00:28:19.030 [2024-11-19 10:55:26.277880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.030 [2024-11-19 10:55:26.277913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.030 qpair failed and we were unable to recover it. 00:28:19.030 [2024-11-19 10:55:26.278201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.030 [2024-11-19 10:55:26.278235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.030 qpair failed and we were unable to recover it. 00:28:19.030 [2024-11-19 10:55:26.278511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.030 [2024-11-19 10:55:26.278543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.030 qpair failed and we were unable to recover it. 00:28:19.030 [2024-11-19 10:55:26.278688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.030 [2024-11-19 10:55:26.278720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.030 qpair failed and we were unable to recover it. 00:28:19.030 [2024-11-19 10:55:26.278939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.030 [2024-11-19 10:55:26.278983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.030 qpair failed and we were unable to recover it. 00:28:19.030 [2024-11-19 10:55:26.279255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.030 [2024-11-19 10:55:26.279287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.030 qpair failed and we were unable to recover it. 00:28:19.030 [2024-11-19 10:55:26.279431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.030 [2024-11-19 10:55:26.279465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.030 qpair failed and we were unable to recover it. 00:28:19.030 [2024-11-19 10:55:26.279693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.030 [2024-11-19 10:55:26.279730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.030 qpair failed and we were unable to recover it. 
00:28:19.030 [2024-11-19 10:55:26.279931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.030 [2024-11-19 10:55:26.279975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.030 qpair failed and we were unable to recover it. 00:28:19.030 [2024-11-19 10:55:26.280260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.030 [2024-11-19 10:55:26.280293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.030 qpair failed and we were unable to recover it. 00:28:19.030 [2024-11-19 10:55:26.280501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.030 [2024-11-19 10:55:26.280534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.030 qpair failed and we were unable to recover it. 00:28:19.030 [2024-11-19 10:55:26.280787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.030 [2024-11-19 10:55:26.280820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.030 qpair failed and we were unable to recover it. 00:28:19.030 [2024-11-19 10:55:26.280945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.030 [2024-11-19 10:55:26.280998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.030 qpair failed and we were unable to recover it. 00:28:19.030 [2024-11-19 10:55:26.281219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.030 [2024-11-19 10:55:26.281252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.030 qpair failed and we were unable to recover it. 00:28:19.030 [2024-11-19 10:55:26.281502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.030 [2024-11-19 10:55:26.281535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.030 qpair failed and we were unable to recover it. 00:28:19.030 [2024-11-19 10:55:26.281659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.030 [2024-11-19 10:55:26.281690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.030 qpair failed and we were unable to recover it. 00:28:19.030 [2024-11-19 10:55:26.281963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.030 [2024-11-19 10:55:26.281998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.030 qpair failed and we were unable to recover it. 00:28:19.030 [2024-11-19 10:55:26.282206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.030 [2024-11-19 10:55:26.282238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.030 qpair failed and we were unable to recover it. 
00:28:19.030 [2024-11-19 10:55:26.282456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.030 [2024-11-19 10:55:26.282489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.030 qpair failed and we were unable to recover it. 00:28:19.030 [2024-11-19 10:55:26.282776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.030 [2024-11-19 10:55:26.282809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.030 qpair failed and we were unable to recover it. 00:28:19.030 [2024-11-19 10:55:26.283007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.030 [2024-11-19 10:55:26.283042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.030 qpair failed and we were unable to recover it. 00:28:19.030 [2024-11-19 10:55:26.283193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.030 [2024-11-19 10:55:26.283225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.030 qpair failed and we were unable to recover it. 00:28:19.030 [2024-11-19 10:55:26.283479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.030 [2024-11-19 10:55:26.283512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.030 qpair failed and we were unable to recover it. 00:28:19.030 [2024-11-19 10:55:26.283804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.030 [2024-11-19 10:55:26.283835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.030 qpair failed and we were unable to recover it. 00:28:19.030 [2024-11-19 10:55:26.284085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.030 [2024-11-19 10:55:26.284120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.030 qpair failed and we were unable to recover it. 00:28:19.030 [2024-11-19 10:55:26.284316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.030 [2024-11-19 10:55:26.284348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.030 qpair failed and we were unable to recover it. 00:28:19.030 [2024-11-19 10:55:26.284530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.031 [2024-11-19 10:55:26.284564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.031 qpair failed and we were unable to recover it. 00:28:19.031 [2024-11-19 10:55:26.284704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.031 [2024-11-19 10:55:26.284736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.031 qpair failed and we were unable to recover it. 
00:28:19.031 [2024-11-19 10:55:26.285012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.031 [2024-11-19 10:55:26.285046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.031 qpair failed and we were unable to recover it. 00:28:19.031 [2024-11-19 10:55:26.285239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.031 [2024-11-19 10:55:26.285271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.031 qpair failed and we were unable to recover it. 00:28:19.031 [2024-11-19 10:55:26.285470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.031 [2024-11-19 10:55:26.285502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.031 qpair failed and we were unable to recover it. 00:28:19.031 [2024-11-19 10:55:26.285772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.031 [2024-11-19 10:55:26.285804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.031 qpair failed and we were unable to recover it. 00:28:19.031 [2024-11-19 10:55:26.285922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.031 [2024-11-19 10:55:26.285966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.031 qpair failed and we were unable to recover it. 00:28:19.031 [2024-11-19 10:55:26.286157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.031 [2024-11-19 10:55:26.286188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.031 qpair failed and we were unable to recover it. 00:28:19.031 [2024-11-19 10:55:26.286484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.031 [2024-11-19 10:55:26.286523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.031 qpair failed and we were unable to recover it. 00:28:19.031 [2024-11-19 10:55:26.286717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.031 [2024-11-19 10:55:26.286749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.031 qpair failed and we were unable to recover it. 00:28:19.031 [2024-11-19 10:55:26.286869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.031 [2024-11-19 10:55:26.286901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.031 qpair failed and we were unable to recover it. 00:28:19.031 [2024-11-19 10:55:26.287095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.031 [2024-11-19 10:55:26.287128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.031 qpair failed and we were unable to recover it. 
00:28:19.031 [2024-11-19 10:55:26.287422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.031 [2024-11-19 10:55:26.287454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.031 qpair failed and we were unable to recover it. 00:28:19.031 [2024-11-19 10:55:26.287650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.031 [2024-11-19 10:55:26.287682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.031 qpair failed and we were unable to recover it. 00:28:19.031 [2024-11-19 10:55:26.287965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.031 [2024-11-19 10:55:26.287999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.031 qpair failed and we were unable to recover it. 00:28:19.031 [2024-11-19 10:55:26.288191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.031 [2024-11-19 10:55:26.288224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.031 qpair failed and we were unable to recover it. 00:28:19.031 [2024-11-19 10:55:26.288437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.031 [2024-11-19 10:55:26.288468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.031 qpair failed and we were unable to recover it. 00:28:19.031 [2024-11-19 10:55:26.288667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.031 [2024-11-19 10:55:26.288700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.031 qpair failed and we were unable to recover it. 00:28:19.031 [2024-11-19 10:55:26.288906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.031 [2024-11-19 10:55:26.288938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.031 qpair failed and we were unable to recover it. 00:28:19.031 [2024-11-19 10:55:26.289257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.031 [2024-11-19 10:55:26.289290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.031 qpair failed and we were unable to recover it. 00:28:19.031 [2024-11-19 10:55:26.289466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.031 [2024-11-19 10:55:26.289499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.031 qpair failed and we were unable to recover it. 00:28:19.031 [2024-11-19 10:55:26.289713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.031 [2024-11-19 10:55:26.289744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.031 qpair failed and we were unable to recover it. 
00:28:19.031 [2024-11-19 10:55:26.289942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.031 [2024-11-19 10:55:26.289991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.031 qpair failed and we were unable to recover it. 00:28:19.031 [2024-11-19 10:55:26.290183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.031 [2024-11-19 10:55:26.290216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.031 qpair failed and we were unable to recover it. 00:28:19.031 [2024-11-19 10:55:26.290433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.031 [2024-11-19 10:55:26.290465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.031 qpair failed and we were unable to recover it. 00:28:19.031 [2024-11-19 10:55:26.290663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.031 [2024-11-19 10:55:26.290696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.031 qpair failed and we were unable to recover it. 00:28:19.031 [2024-11-19 10:55:26.290889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.031 [2024-11-19 10:55:26.290921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.031 qpair failed and we were unable to recover it. 00:28:19.031 [2024-11-19 10:55:26.291129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.031 [2024-11-19 10:55:26.291162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.031 qpair failed and we were unable to recover it. 00:28:19.031 [2024-11-19 10:55:26.291369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.031 [2024-11-19 10:55:26.291403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.031 qpair failed and we were unable to recover it. 00:28:19.031 [2024-11-19 10:55:26.291623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.031 [2024-11-19 10:55:26.291657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.031 qpair failed and we were unable to recover it. 00:28:19.031 [2024-11-19 10:55:26.291913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.031 [2024-11-19 10:55:26.291945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.031 qpair failed and we were unable to recover it. 00:28:19.031 [2024-11-19 10:55:26.292142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.031 [2024-11-19 10:55:26.292174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.031 qpair failed and we were unable to recover it. 
00:28:19.031 [2024-11-19 10:55:26.292318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.031 [2024-11-19 10:55:26.292351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.031 qpair failed and we were unable to recover it. 00:28:19.031 [2024-11-19 10:55:26.292477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.031 [2024-11-19 10:55:26.292509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.031 qpair failed and we were unable to recover it. 00:28:19.031 [2024-11-19 10:55:26.292713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.031 [2024-11-19 10:55:26.292746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.031 qpair failed and we were unable to recover it. 00:28:19.031 [2024-11-19 10:55:26.292891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.031 [2024-11-19 10:55:26.292924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.031 qpair failed and we were unable to recover it. 00:28:19.031 [2024-11-19 10:55:26.293159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.031 [2024-11-19 10:55:26.293193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.031 qpair failed and we were unable to recover it. 00:28:19.031 [2024-11-19 10:55:26.293395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.031 [2024-11-19 10:55:26.293429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.031 qpair failed and we were unable to recover it. 00:28:19.031 [2024-11-19 10:55:26.293730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.032 [2024-11-19 10:55:26.293762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.032 qpair failed and we were unable to recover it. 00:28:19.032 [2024-11-19 10:55:26.293909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.032 [2024-11-19 10:55:26.293941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.032 qpair failed and we were unable to recover it. 00:28:19.032 [2024-11-19 10:55:26.294171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.032 [2024-11-19 10:55:26.294204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.032 qpair failed and we were unable to recover it. 00:28:19.032 [2024-11-19 10:55:26.294488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.032 [2024-11-19 10:55:26.294521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.032 qpair failed and we were unable to recover it. 
00:28:19.032 [2024-11-19 10:55:26.294642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.032 [2024-11-19 10:55:26.294674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.032 qpair failed and we were unable to recover it. 00:28:19.032 [2024-11-19 10:55:26.294808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.032 [2024-11-19 10:55:26.294840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.032 qpair failed and we were unable to recover it. 00:28:19.032 [2024-11-19 10:55:26.295025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.032 [2024-11-19 10:55:26.295059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.032 qpair failed and we were unable to recover it. 00:28:19.032 [2024-11-19 10:55:26.295328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.032 [2024-11-19 10:55:26.295362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.032 qpair failed and we were unable to recover it. 00:28:19.032 [2024-11-19 10:55:26.295559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.032 [2024-11-19 10:55:26.295590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.032 qpair failed and we were unable to recover it. 00:28:19.032 [2024-11-19 10:55:26.295773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.032 [2024-11-19 10:55:26.295807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.032 qpair failed and we were unable to recover it. 00:28:19.032 [2024-11-19 10:55:26.295934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.032 [2024-11-19 10:55:26.295977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.032 qpair failed and we were unable to recover it. 00:28:19.032 [2024-11-19 10:55:26.296187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.032 [2024-11-19 10:55:26.296220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.032 qpair failed and we were unable to recover it. 00:28:19.032 [2024-11-19 10:55:26.296353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.032 [2024-11-19 10:55:26.296385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.032 qpair failed and we were unable to recover it. 00:28:19.032 [2024-11-19 10:55:26.296507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.032 [2024-11-19 10:55:26.296542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.032 qpair failed and we were unable to recover it. 
00:28:19.032 [2024-11-19 10:55:26.296658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.032 [2024-11-19 10:55:26.296691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.032 qpair failed and we were unable to recover it. 00:28:19.032 [2024-11-19 10:55:26.296837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.032 [2024-11-19 10:55:26.296871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.032 qpair failed and we were unable to recover it. 00:28:19.032 [2024-11-19 10:55:26.296978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.032 [2024-11-19 10:55:26.297012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.032 qpair failed and we were unable to recover it. 00:28:19.032 [2024-11-19 10:55:26.297134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.032 [2024-11-19 10:55:26.297166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.032 qpair failed and we were unable to recover it. 00:28:19.032 [2024-11-19 10:55:26.297288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.032 [2024-11-19 10:55:26.297322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.032 qpair failed and we were unable to recover it. 00:28:19.032 [2024-11-19 10:55:26.297570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.032 [2024-11-19 10:55:26.297602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.032 qpair failed and we were unable to recover it. 00:28:19.032 [2024-11-19 10:55:26.297877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.032 [2024-11-19 10:55:26.297910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.032 qpair failed and we were unable to recover it. 00:28:19.032 [2024-11-19 10:55:26.298196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.032 [2024-11-19 10:55:26.298229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.032 qpair failed and we were unable to recover it. 00:28:19.032 [2024-11-19 10:55:26.298416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.032 [2024-11-19 10:55:26.298449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.032 qpair failed and we were unable to recover it. 00:28:19.032 [2024-11-19 10:55:26.298579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.032 [2024-11-19 10:55:26.298611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.032 qpair failed and we were unable to recover it. 
00:28:19.032 [2024-11-19 10:55:26.298826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.032 [2024-11-19 10:55:26.298859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.032 qpair failed and we were unable to recover it. 00:28:19.032 [2024-11-19 10:55:26.299056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.032 [2024-11-19 10:55:26.299090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.032 qpair failed and we were unable to recover it. 00:28:19.032 [2024-11-19 10:55:26.299300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.032 [2024-11-19 10:55:26.299333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.032 qpair failed and we were unable to recover it. 00:28:19.032 [2024-11-19 10:55:26.299551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.032 [2024-11-19 10:55:26.299583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.032 qpair failed and we were unable to recover it. 00:28:19.032 [2024-11-19 10:55:26.299732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.032 [2024-11-19 10:55:26.299764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.032 qpair failed and we were unable to recover it. 00:28:19.032 [2024-11-19 10:55:26.299959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.032 [2024-11-19 10:55:26.299993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.032 qpair failed and we were unable to recover it. 00:28:19.032 [2024-11-19 10:55:26.300114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.032 [2024-11-19 10:55:26.300147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.032 qpair failed and we were unable to recover it. 00:28:19.032 [2024-11-19 10:55:26.300253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.032 [2024-11-19 10:55:26.300285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.032 qpair failed and we were unable to recover it. 00:28:19.032 [2024-11-19 10:55:26.300551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.032 [2024-11-19 10:55:26.300583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.032 qpair failed and we were unable to recover it. 00:28:19.032 [2024-11-19 10:55:26.300858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.032 [2024-11-19 10:55:26.300889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.032 qpair failed and we were unable to recover it. 
00:28:19.032 [2024-11-19 10:55:26.301169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.032 [2024-11-19 10:55:26.301204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.032 qpair failed and we were unable to recover it. 00:28:19.032 [2024-11-19 10:55:26.301345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.032 [2024-11-19 10:55:26.301376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.032 qpair failed and we were unable to recover it. 00:28:19.032 [2024-11-19 10:55:26.301625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.032 [2024-11-19 10:55:26.301658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.032 qpair failed and we were unable to recover it. 00:28:19.032 [2024-11-19 10:55:26.301838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.032 [2024-11-19 10:55:26.301870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.032 qpair failed and we were unable to recover it. 00:28:19.032 [2024-11-19 10:55:26.302058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.033 [2024-11-19 10:55:26.302099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.033 qpair failed and we were unable to recover it. 00:28:19.033 [2024-11-19 10:55:26.302368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.033 [2024-11-19 10:55:26.302400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.033 qpair failed and we were unable to recover it. 00:28:19.033 [2024-11-19 10:55:26.302585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.033 [2024-11-19 10:55:26.302617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.033 qpair failed and we were unable to recover it. 00:28:19.033 [2024-11-19 10:55:26.302879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.033 [2024-11-19 10:55:26.302911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.033 qpair failed and we were unable to recover it. 00:28:19.033 [2024-11-19 10:55:26.303085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.033 [2024-11-19 10:55:26.303119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.033 qpair failed and we were unable to recover it. 00:28:19.033 [2024-11-19 10:55:26.303254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.033 [2024-11-19 10:55:26.303286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.033 qpair failed and we were unable to recover it. 
00:28:19.033 [2024-11-19 10:55:26.303503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.033 [2024-11-19 10:55:26.303536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.033 qpair failed and we were unable to recover it. 00:28:19.033 [2024-11-19 10:55:26.303651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.033 [2024-11-19 10:55:26.303683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.033 qpair failed and we were unable to recover it. 00:28:19.033 [2024-11-19 10:55:26.303864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.033 [2024-11-19 10:55:26.303896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.033 qpair failed and we were unable to recover it. 00:28:19.033 [2024-11-19 10:55:26.304055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.033 [2024-11-19 10:55:26.304089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.033 qpair failed and we were unable to recover it. 00:28:19.033 [2024-11-19 10:55:26.304198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.033 [2024-11-19 10:55:26.304229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.033 qpair failed and we were unable to recover it. 00:28:19.033 [2024-11-19 10:55:26.304430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.033 [2024-11-19 10:55:26.304463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.033 qpair failed and we were unable to recover it. 00:28:19.033 [2024-11-19 10:55:26.304692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.033 [2024-11-19 10:55:26.304725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.033 qpair failed and we were unable to recover it. 00:28:19.033 [2024-11-19 10:55:26.304927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.033 [2024-11-19 10:55:26.304972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.033 qpair failed and we were unable to recover it. 00:28:19.033 [2024-11-19 10:55:26.305241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.033 [2024-11-19 10:55:26.305274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.033 qpair failed and we were unable to recover it. 00:28:19.033 [2024-11-19 10:55:26.305543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.033 [2024-11-19 10:55:26.305575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.033 qpair failed and we were unable to recover it. 
00:28:19.033 [2024-11-19 10:55:26.305714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.033 [2024-11-19 10:55:26.305746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.033 qpair failed and we were unable to recover it.
00:28:19.033 [identical connect()/qpair-failure triple repeated for every retry from 2024-11-19 10:55:26.305890 through 10:55:26.348559; each attempt targets tqpair=0x22c8ba0 at 10.0.0.2, port=4420 and fails with errno = 111]
00:28:19.039 [2024-11-19 10:55:26.348878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.039 [2024-11-19 10:55:26.348911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.039 qpair failed and we were unable to recover it.
00:28:19.039 [2024-11-19 10:55:26.349031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.039 [2024-11-19 10:55:26.349064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.039 qpair failed and we were unable to recover it. 00:28:19.039 [2024-11-19 10:55:26.349311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.039 [2024-11-19 10:55:26.349344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.039 qpair failed and we were unable to recover it. 00:28:19.039 [2024-11-19 10:55:26.349531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.039 [2024-11-19 10:55:26.349563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.039 qpair failed and we were unable to recover it. 00:28:19.039 [2024-11-19 10:55:26.349684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.039 [2024-11-19 10:55:26.349716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.039 qpair failed and we were unable to recover it. 00:28:19.039 [2024-11-19 10:55:26.349905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.039 [2024-11-19 10:55:26.349937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.039 qpair failed and we were unable to recover it. 00:28:19.039 [2024-11-19 10:55:26.350125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.039 [2024-11-19 10:55:26.350158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.039 qpair failed and we were unable to recover it. 00:28:19.039 [2024-11-19 10:55:26.350279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.039 [2024-11-19 10:55:26.350313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.039 qpair failed and we were unable to recover it. 00:28:19.039 [2024-11-19 10:55:26.350439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.039 [2024-11-19 10:55:26.350471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.039 qpair failed and we were unable to recover it. 00:28:19.039 [2024-11-19 10:55:26.350720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.039 [2024-11-19 10:55:26.350752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.039 qpair failed and we were unable to recover it. 00:28:19.039 [2024-11-19 10:55:26.351042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.039 [2024-11-19 10:55:26.351075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.039 qpair failed and we were unable to recover it. 
00:28:19.039 [2024-11-19 10:55:26.351208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.039 [2024-11-19 10:55:26.351240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.039 qpair failed and we were unable to recover it. 00:28:19.039 [2024-11-19 10:55:26.351357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.039 [2024-11-19 10:55:26.351389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.039 qpair failed and we were unable to recover it. 00:28:19.039 [2024-11-19 10:55:26.351569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.039 [2024-11-19 10:55:26.351601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.039 qpair failed and we were unable to recover it. 00:28:19.039 [2024-11-19 10:55:26.351793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.039 [2024-11-19 10:55:26.351825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.039 qpair failed and we were unable to recover it. 00:28:19.039 [2024-11-19 10:55:26.351998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.039 [2024-11-19 10:55:26.352031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.039 qpair failed and we were unable to recover it. 00:28:19.039 [2024-11-19 10:55:26.352158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.039 [2024-11-19 10:55:26.352189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.039 qpair failed and we were unable to recover it. 00:28:19.039 [2024-11-19 10:55:26.352368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.039 [2024-11-19 10:55:26.352400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.039 qpair failed and we were unable to recover it. 00:28:19.039 [2024-11-19 10:55:26.352588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.039 [2024-11-19 10:55:26.352620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.039 qpair failed and we were unable to recover it. 00:28:19.039 [2024-11-19 10:55:26.352722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.039 [2024-11-19 10:55:26.352758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.039 qpair failed and we were unable to recover it. 00:28:19.039 [2024-11-19 10:55:26.352961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.039 [2024-11-19 10:55:26.352995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.039 qpair failed and we were unable to recover it. 
00:28:19.039 [2024-11-19 10:55:26.353181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.039 [2024-11-19 10:55:26.353213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.039 qpair failed and we were unable to recover it. 00:28:19.039 [2024-11-19 10:55:26.353334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.039 [2024-11-19 10:55:26.353366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.039 qpair failed and we were unable to recover it. 00:28:19.039 [2024-11-19 10:55:26.353496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.039 [2024-11-19 10:55:26.353528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.039 qpair failed and we were unable to recover it. 00:28:19.039 [2024-11-19 10:55:26.353770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.039 [2024-11-19 10:55:26.353802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.039 qpair failed and we were unable to recover it. 00:28:19.039 [2024-11-19 10:55:26.353999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.039 [2024-11-19 10:55:26.354032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.039 qpair failed and we were unable to recover it. 00:28:19.039 [2024-11-19 10:55:26.354134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.039 [2024-11-19 10:55:26.354166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.039 qpair failed and we were unable to recover it. 00:28:19.039 [2024-11-19 10:55:26.354285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.039 [2024-11-19 10:55:26.354316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.039 qpair failed and we were unable to recover it. 00:28:19.039 [2024-11-19 10:55:26.354509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.039 [2024-11-19 10:55:26.354541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.039 qpair failed and we were unable to recover it. 00:28:19.039 [2024-11-19 10:55:26.354743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.039 [2024-11-19 10:55:26.354774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.039 qpair failed and we were unable to recover it. 00:28:19.039 [2024-11-19 10:55:26.354902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.039 [2024-11-19 10:55:26.354934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.039 qpair failed and we were unable to recover it. 
00:28:19.039 [2024-11-19 10:55:26.355056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.039 [2024-11-19 10:55:26.355088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.039 qpair failed and we were unable to recover it. 00:28:19.039 [2024-11-19 10:55:26.355355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.039 [2024-11-19 10:55:26.355388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.039 qpair failed and we were unable to recover it. 00:28:19.039 [2024-11-19 10:55:26.355520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.039 [2024-11-19 10:55:26.355552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.040 qpair failed and we were unable to recover it. 00:28:19.040 [2024-11-19 10:55:26.355763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.040 [2024-11-19 10:55:26.355795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.040 qpair failed and we were unable to recover it. 00:28:19.040 [2024-11-19 10:55:26.355994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.040 [2024-11-19 10:55:26.356028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.040 qpair failed and we were unable to recover it. 00:28:19.040 [2024-11-19 10:55:26.356162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.040 [2024-11-19 10:55:26.356194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.040 qpair failed and we were unable to recover it. 00:28:19.040 [2024-11-19 10:55:26.356360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.040 [2024-11-19 10:55:26.356391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.040 qpair failed and we were unable to recover it. 00:28:19.040 [2024-11-19 10:55:26.356639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.040 [2024-11-19 10:55:26.356672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.040 qpair failed and we were unable to recover it. 00:28:19.040 [2024-11-19 10:55:26.356843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.040 [2024-11-19 10:55:26.356874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.040 qpair failed and we were unable to recover it. 00:28:19.040 [2024-11-19 10:55:26.357051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.040 [2024-11-19 10:55:26.357085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.040 qpair failed and we were unable to recover it. 
00:28:19.040 [2024-11-19 10:55:26.357215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.040 [2024-11-19 10:55:26.357247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.040 qpair failed and we were unable to recover it. 00:28:19.040 [2024-11-19 10:55:26.357415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.040 [2024-11-19 10:55:26.357447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.040 qpair failed and we were unable to recover it. 00:28:19.040 [2024-11-19 10:55:26.357685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.040 [2024-11-19 10:55:26.357717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.040 qpair failed and we were unable to recover it. 00:28:19.040 [2024-11-19 10:55:26.357899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.040 [2024-11-19 10:55:26.357931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.040 qpair failed and we were unable to recover it. 00:28:19.040 [2024-11-19 10:55:26.358077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.040 [2024-11-19 10:55:26.358109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.040 qpair failed and we were unable to recover it. 00:28:19.040 [2024-11-19 10:55:26.358315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.040 [2024-11-19 10:55:26.358353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.040 qpair failed and we were unable to recover it. 00:28:19.040 [2024-11-19 10:55:26.358593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.040 [2024-11-19 10:55:26.358625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.040 qpair failed and we were unable to recover it. 00:28:19.040 [2024-11-19 10:55:26.358729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.040 [2024-11-19 10:55:26.358760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.040 qpair failed and we were unable to recover it. 00:28:19.040 [2024-11-19 10:55:26.358931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.040 [2024-11-19 10:55:26.358974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.040 qpair failed and we were unable to recover it. 00:28:19.040 [2024-11-19 10:55:26.359148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.040 [2024-11-19 10:55:26.359179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.040 qpair failed and we were unable to recover it. 
00:28:19.040 [2024-11-19 10:55:26.359415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.040 [2024-11-19 10:55:26.359447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.040 qpair failed and we were unable to recover it. 00:28:19.040 [2024-11-19 10:55:26.359563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.040 [2024-11-19 10:55:26.359595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.040 qpair failed and we were unable to recover it. 00:28:19.040 [2024-11-19 10:55:26.359779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.040 [2024-11-19 10:55:26.359811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.040 qpair failed and we were unable to recover it. 00:28:19.040 [2024-11-19 10:55:26.359934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.040 [2024-11-19 10:55:26.359977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.040 qpair failed and we were unable to recover it. 00:28:19.040 [2024-11-19 10:55:26.360077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.040 [2024-11-19 10:55:26.360109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.040 qpair failed and we were unable to recover it. 00:28:19.040 [2024-11-19 10:55:26.360301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.040 [2024-11-19 10:55:26.360333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.040 qpair failed and we were unable to recover it. 00:28:19.040 [2024-11-19 10:55:26.360548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.040 [2024-11-19 10:55:26.360579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.040 qpair failed and we were unable to recover it. 00:28:19.040 [2024-11-19 10:55:26.360749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.040 [2024-11-19 10:55:26.360782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.040 qpair failed and we were unable to recover it. 00:28:19.040 [2024-11-19 10:55:26.360971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.040 [2024-11-19 10:55:26.361006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.040 qpair failed and we were unable to recover it. 00:28:19.040 [2024-11-19 10:55:26.361224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.040 [2024-11-19 10:55:26.361256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.040 qpair failed and we were unable to recover it. 
00:28:19.040 [2024-11-19 10:55:26.361444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.040 [2024-11-19 10:55:26.361475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.040 qpair failed and we were unable to recover it. 00:28:19.040 [2024-11-19 10:55:26.361642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.040 [2024-11-19 10:55:26.361675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.040 qpair failed and we were unable to recover it. 00:28:19.040 [2024-11-19 10:55:26.361859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.040 [2024-11-19 10:55:26.361890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.040 qpair failed and we were unable to recover it. 00:28:19.040 [2024-11-19 10:55:26.362171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.040 [2024-11-19 10:55:26.362204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.040 qpair failed and we were unable to recover it. 00:28:19.040 [2024-11-19 10:55:26.362344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.040 [2024-11-19 10:55:26.362376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.040 qpair failed and we were unable to recover it. 00:28:19.040 [2024-11-19 10:55:26.362545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.040 [2024-11-19 10:55:26.362577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.040 qpair failed and we were unable to recover it. 00:28:19.040 [2024-11-19 10:55:26.362764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.040 [2024-11-19 10:55:26.362795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.040 qpair failed and we were unable to recover it. 00:28:19.040 [2024-11-19 10:55:26.363038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.040 [2024-11-19 10:55:26.363071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.040 qpair failed and we were unable to recover it. 00:28:19.040 [2024-11-19 10:55:26.363253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.040 [2024-11-19 10:55:26.363284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.040 qpair failed and we were unable to recover it. 00:28:19.040 [2024-11-19 10:55:26.363569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.040 [2024-11-19 10:55:26.363608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.040 qpair failed and we were unable to recover it. 
00:28:19.040 [2024-11-19 10:55:26.363796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.040 [2024-11-19 10:55:26.363827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.040 qpair failed and we were unable to recover it. 00:28:19.041 [2024-11-19 10:55:26.364040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.041 [2024-11-19 10:55:26.364075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.041 qpair failed and we were unable to recover it. 00:28:19.041 [2024-11-19 10:55:26.364195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.041 [2024-11-19 10:55:26.364244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.041 qpair failed and we were unable to recover it. 00:28:19.041 [2024-11-19 10:55:26.364353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.041 [2024-11-19 10:55:26.364385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.041 qpair failed and we were unable to recover it. 00:28:19.041 [2024-11-19 10:55:26.364618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.041 [2024-11-19 10:55:26.364649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.041 qpair failed and we were unable to recover it. 00:28:19.041 [2024-11-19 10:55:26.364891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.041 [2024-11-19 10:55:26.364923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.041 qpair failed and we were unable to recover it. 00:28:19.041 [2024-11-19 10:55:26.365105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.041 [2024-11-19 10:55:26.365137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.041 qpair failed and we were unable to recover it. 00:28:19.041 [2024-11-19 10:55:26.365376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.041 [2024-11-19 10:55:26.365408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.041 qpair failed and we were unable to recover it. 00:28:19.041 [2024-11-19 10:55:26.365668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.041 [2024-11-19 10:55:26.365700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.041 qpair failed and we were unable to recover it. 00:28:19.041 [2024-11-19 10:55:26.365796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.041 [2024-11-19 10:55:26.365828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.041 qpair failed and we were unable to recover it. 
00:28:19.041 [2024-11-19 10:55:26.365994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.041 [2024-11-19 10:55:26.366027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.041 qpair failed and we were unable to recover it. 00:28:19.041 [2024-11-19 10:55:26.366270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.041 [2024-11-19 10:55:26.366302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.041 qpair failed and we were unable to recover it. 00:28:19.041 [2024-11-19 10:55:26.366564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.041 [2024-11-19 10:55:26.366595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.041 qpair failed and we were unable to recover it. 00:28:19.041 [2024-11-19 10:55:26.366787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.041 [2024-11-19 10:55:26.366819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.041 qpair failed and we were unable to recover it. 00:28:19.041 [2024-11-19 10:55:26.367058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.041 [2024-11-19 10:55:26.367090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.041 qpair failed and we were unable to recover it. 00:28:19.041 [2024-11-19 10:55:26.367267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.041 [2024-11-19 10:55:26.367300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.041 qpair failed and we were unable to recover it. 00:28:19.041 [2024-11-19 10:55:26.367484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.041 [2024-11-19 10:55:26.367520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.041 qpair failed and we were unable to recover it. 00:28:19.041 [2024-11-19 10:55:26.367692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.041 [2024-11-19 10:55:26.367723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.041 qpair failed and we were unable to recover it. 00:28:19.041 [2024-11-19 10:55:26.367844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.041 [2024-11-19 10:55:26.367876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.041 qpair failed and we were unable to recover it. 00:28:19.041 [2024-11-19 10:55:26.368114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.041 [2024-11-19 10:55:26.368147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.041 qpair failed and we were unable to recover it. 
00:28:19.041 [2024-11-19 10:55:26.368332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.041 [2024-11-19 10:55:26.368364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.041 qpair failed and we were unable to recover it. 00:28:19.041 [2024-11-19 10:55:26.368571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.041 [2024-11-19 10:55:26.368603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.041 qpair failed and we were unable to recover it. 00:28:19.041 [2024-11-19 10:55:26.368771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.041 [2024-11-19 10:55:26.368801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.041 qpair failed and we were unable to recover it. 00:28:19.041 [2024-11-19 10:55:26.368919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.041 [2024-11-19 10:55:26.368961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.041 qpair failed and we were unable to recover it. 00:28:19.041 [2024-11-19 10:55:26.369154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.041 [2024-11-19 10:55:26.369187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.041 qpair failed and we were unable to recover it. 00:28:19.041 [2024-11-19 10:55:26.369285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.041 [2024-11-19 10:55:26.369316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.041 qpair failed and we were unable to recover it. 00:28:19.041 [2024-11-19 10:55:26.369486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.041 [2024-11-19 10:55:26.369518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.041 qpair failed and we were unable to recover it. 00:28:19.041 [2024-11-19 10:55:26.369720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.041 [2024-11-19 10:55:26.369752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.041 qpair failed and we were unable to recover it. 00:28:19.041 [2024-11-19 10:55:26.370000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.041 [2024-11-19 10:55:26.370034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.041 qpair failed and we were unable to recover it. 00:28:19.041 [2024-11-19 10:55:26.370231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.041 [2024-11-19 10:55:26.370263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.041 qpair failed and we were unable to recover it. 
00:28:19.041 [2024-11-19 10:55:26.370381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.041 [2024-11-19 10:55:26.370413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.041 qpair failed and we were unable to recover it. 00:28:19.041 [2024-11-19 10:55:26.370648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.041 [2024-11-19 10:55:26.370679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.041 qpair failed and we were unable to recover it. 00:28:19.041 [2024-11-19 10:55:26.370867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.041 [2024-11-19 10:55:26.370899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.041 qpair failed and we were unable to recover it. 00:28:19.041 [2024-11-19 10:55:26.371043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.041 [2024-11-19 10:55:26.371075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.041 qpair failed and we were unable to recover it. 00:28:19.041 [2024-11-19 10:55:26.371271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.041 [2024-11-19 10:55:26.371305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.041 qpair failed and we were unable to recover it. 00:28:19.041 [2024-11-19 10:55:26.371487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.041 [2024-11-19 10:55:26.371519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.041 qpair failed and we were unable to recover it. 00:28:19.041 [2024-11-19 10:55:26.371690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.041 [2024-11-19 10:55:26.371721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.041 qpair failed and we were unable to recover it. 00:28:19.041 [2024-11-19 10:55:26.371832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.041 [2024-11-19 10:55:26.371864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.041 qpair failed and we were unable to recover it. 00:28:19.041 [2024-11-19 10:55:26.371988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.041 [2024-11-19 10:55:26.372021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.041 qpair failed and we were unable to recover it. 00:28:19.041 [2024-11-19 10:55:26.372198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.042 [2024-11-19 10:55:26.372230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.042 qpair failed and we were unable to recover it. 
00:28:19.042 [2024-11-19 10:55:26.372489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.042 [2024-11-19 10:55:26.372520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.042 qpair failed and we were unable to recover it. 00:28:19.042 [2024-11-19 10:55:26.372707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.042 [2024-11-19 10:55:26.372741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.042 qpair failed and we were unable to recover it. 00:28:19.042 [2024-11-19 10:55:26.372922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.042 [2024-11-19 10:55:26.372963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.042 qpair failed and we were unable to recover it. 00:28:19.042 [2024-11-19 10:55:26.373137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.042 [2024-11-19 10:55:26.373176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.042 qpair failed and we were unable to recover it. 00:28:19.042 [2024-11-19 10:55:26.373424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.042 [2024-11-19 10:55:26.373456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.042 qpair failed and we were unable to recover it. 00:28:19.042 [2024-11-19 10:55:26.373720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.042 [2024-11-19 10:55:26.373752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.042 qpair failed and we were unable to recover it. 00:28:19.042 [2024-11-19 10:55:26.373982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.042 [2024-11-19 10:55:26.374016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.042 qpair failed and we were unable to recover it. 00:28:19.042 [2024-11-19 10:55:26.374192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.042 [2024-11-19 10:55:26.374225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.042 qpair failed and we were unable to recover it. 00:28:19.042 [2024-11-19 10:55:26.374416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.042 [2024-11-19 10:55:26.374448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.042 qpair failed and we were unable to recover it. 00:28:19.042 [2024-11-19 10:55:26.374617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.042 [2024-11-19 10:55:26.374648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.042 qpair failed and we were unable to recover it. 
00:28:19.042 [2024-11-19 10:55:26.374830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.042 [2024-11-19 10:55:26.374862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.042 qpair failed and we were unable to recover it. 00:28:19.042 [2024-11-19 10:55:26.374994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.042 [2024-11-19 10:55:26.375028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.042 qpair failed and we were unable to recover it. 00:28:19.042 [2024-11-19 10:55:26.375267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.042 [2024-11-19 10:55:26.375299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.042 qpair failed and we were unable to recover it. 00:28:19.042 [2024-11-19 10:55:26.375402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.042 [2024-11-19 10:55:26.375434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.042 qpair failed and we were unable to recover it. 00:28:19.042 [2024-11-19 10:55:26.375621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.042 [2024-11-19 10:55:26.375652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.042 qpair failed and we were unable to recover it. 00:28:19.042 [2024-11-19 10:55:26.375786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.042 [2024-11-19 10:55:26.375818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.042 qpair failed and we were unable to recover it. 00:28:19.042 [2024-11-19 10:55:26.375937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.042 [2024-11-19 10:55:26.375979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.042 qpair failed and we were unable to recover it. 00:28:19.042 [2024-11-19 10:55:26.376093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.042 [2024-11-19 10:55:26.376126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.042 qpair failed and we were unable to recover it. 00:28:19.042 [2024-11-19 10:55:26.376334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.042 [2024-11-19 10:55:26.376367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.042 qpair failed and we were unable to recover it. 00:28:19.042 [2024-11-19 10:55:26.376550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.042 [2024-11-19 10:55:26.376580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.042 qpair failed and we were unable to recover it. 
00:28:19.042 [2024-11-19 10:55:26.376770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.042 [2024-11-19 10:55:26.376802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.042 qpair failed and we were unable to recover it. 00:28:19.042 [2024-11-19 10:55:26.377068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.042 [2024-11-19 10:55:26.377102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.042 qpair failed and we were unable to recover it. 00:28:19.042 [2024-11-19 10:55:26.377366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.042 [2024-11-19 10:55:26.377397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.042 qpair failed and we were unable to recover it. 00:28:19.042 [2024-11-19 10:55:26.377584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.042 [2024-11-19 10:55:26.377615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.042 qpair failed and we were unable to recover it. 00:28:19.042 [2024-11-19 10:55:26.377814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.042 [2024-11-19 10:55:26.377847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.042 qpair failed and we were unable to recover it. 00:28:19.042 [2024-11-19 10:55:26.377962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.042 [2024-11-19 10:55:26.377996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.042 qpair failed and we were unable to recover it. 00:28:19.042 [2024-11-19 10:55:26.378174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.042 [2024-11-19 10:55:26.378206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.042 qpair failed and we were unable to recover it. 00:28:19.042 [2024-11-19 10:55:26.378397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.042 [2024-11-19 10:55:26.378428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.042 qpair failed and we were unable to recover it. 00:28:19.042 [2024-11-19 10:55:26.378555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.042 [2024-11-19 10:55:26.378587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.042 qpair failed and we were unable to recover it. 00:28:19.042 [2024-11-19 10:55:26.378866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.042 [2024-11-19 10:55:26.378897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.042 qpair failed and we were unable to recover it. 
00:28:19.042 [2024-11-19 10:55:26.379100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.042 [2024-11-19 10:55:26.379140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.042 qpair failed and we were unable to recover it. 00:28:19.042 [2024-11-19 10:55:26.379337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.042 [2024-11-19 10:55:26.379368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.042 qpair failed and we were unable to recover it. 00:28:19.042 [2024-11-19 10:55:26.379553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.042 [2024-11-19 10:55:26.379585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.042 qpair failed and we were unable to recover it. 00:28:19.042 [2024-11-19 10:55:26.379762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.042 [2024-11-19 10:55:26.379794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.042 qpair failed and we were unable to recover it. 00:28:19.042 [2024-11-19 10:55:26.380040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.042 [2024-11-19 10:55:26.380074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.042 qpair failed and we were unable to recover it. 00:28:19.042 [2024-11-19 10:55:26.380194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.042 [2024-11-19 10:55:26.380226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.042 qpair failed and we were unable to recover it. 00:28:19.042 [2024-11-19 10:55:26.380415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.042 [2024-11-19 10:55:26.380447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.042 qpair failed and we were unable to recover it. 00:28:19.042 [2024-11-19 10:55:26.380556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.043 [2024-11-19 10:55:26.380587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.043 qpair failed and we were unable to recover it. 00:28:19.043 [2024-11-19 10:55:26.380856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.043 [2024-11-19 10:55:26.380888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.043 qpair failed and we were unable to recover it. 00:28:19.043 [2024-11-19 10:55:26.381086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.043 [2024-11-19 10:55:26.381121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.043 qpair failed and we were unable to recover it. 
00:28:19.043 [2024-11-19 10:55:26.381306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.043 [2024-11-19 10:55:26.381337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.043 qpair failed and we were unable to recover it. 00:28:19.043 [2024-11-19 10:55:26.381521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.043 [2024-11-19 10:55:26.381553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.043 qpair failed and we were unable to recover it. 00:28:19.043 [2024-11-19 10:55:26.381818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.043 [2024-11-19 10:55:26.381850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.043 qpair failed and we were unable to recover it. 00:28:19.043 [2024-11-19 10:55:26.382034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.043 [2024-11-19 10:55:26.382068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.043 qpair failed and we were unable to recover it. 00:28:19.043 [2024-11-19 10:55:26.382322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.043 [2024-11-19 10:55:26.382354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.043 qpair failed and we were unable to recover it. 00:28:19.043 [2024-11-19 10:55:26.382475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.043 [2024-11-19 10:55:26.382507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.043 qpair failed and we were unable to recover it. 00:28:19.043 [2024-11-19 10:55:26.382701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.043 [2024-11-19 10:55:26.382733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.043 qpair failed and we were unable to recover it. 00:28:19.043 [2024-11-19 10:55:26.382913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.043 [2024-11-19 10:55:26.382945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.043 qpair failed and we were unable to recover it. 00:28:19.043 [2024-11-19 10:55:26.383069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.043 [2024-11-19 10:55:26.383101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.043 qpair failed and we were unable to recover it. 00:28:19.043 [2024-11-19 10:55:26.383204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.043 [2024-11-19 10:55:26.383236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.043 qpair failed and we were unable to recover it. 
00:28:19.043 [2024-11-19 10:55:26.383494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.043 [2024-11-19 10:55:26.383525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.043 qpair failed and we were unable to recover it. 00:28:19.043 [2024-11-19 10:55:26.383710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.043 [2024-11-19 10:55:26.383743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.043 qpair failed and we were unable to recover it. 00:28:19.043 [2024-11-19 10:55:26.383929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.043 [2024-11-19 10:55:26.383973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.043 qpair failed and we were unable to recover it. 00:28:19.043 [2024-11-19 10:55:26.384235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.043 [2024-11-19 10:55:26.384267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.043 qpair failed and we were unable to recover it. 00:28:19.043 [2024-11-19 10:55:26.384459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.043 [2024-11-19 10:55:26.384490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.043 qpair failed and we were unable to recover it. 00:28:19.043 [2024-11-19 10:55:26.384732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.043 [2024-11-19 10:55:26.384764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.043 qpair failed and we were unable to recover it. 00:28:19.043 [2024-11-19 10:55:26.384960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.043 [2024-11-19 10:55:26.384994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.043 qpair failed and we were unable to recover it. 00:28:19.043 [2024-11-19 10:55:26.385263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.043 [2024-11-19 10:55:26.385296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.043 qpair failed and we were unable to recover it. 00:28:19.043 [2024-11-19 10:55:26.385499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.043 [2024-11-19 10:55:26.385532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.043 qpair failed and we were unable to recover it. 00:28:19.043 [2024-11-19 10:55:26.385759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.043 [2024-11-19 10:55:26.385791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.043 qpair failed and we were unable to recover it. 
00:28:19.043 [2024-11-19 10:55:26.385979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.043 [2024-11-19 10:55:26.386013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.043 qpair failed and we were unable to recover it. 00:28:19.043 [2024-11-19 10:55:26.386222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.043 [2024-11-19 10:55:26.386254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.043 qpair failed and we were unable to recover it. 00:28:19.043 [2024-11-19 10:55:26.386504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.043 [2024-11-19 10:55:26.386536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.043 qpair failed and we were unable to recover it. 00:28:19.043 [2024-11-19 10:55:26.386652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.043 [2024-11-19 10:55:26.386683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.043 qpair failed and we were unable to recover it. 00:28:19.043 [2024-11-19 10:55:26.386787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.043 [2024-11-19 10:55:26.386819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.043 qpair failed and we were unable to recover it. 00:28:19.043 [2024-11-19 10:55:26.386958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.043 [2024-11-19 10:55:26.386992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.043 qpair failed and we were unable to recover it. 00:28:19.043 [2024-11-19 10:55:26.387115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.043 [2024-11-19 10:55:26.387147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.043 qpair failed and we were unable to recover it. 00:28:19.043 [2024-11-19 10:55:26.387266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.043 [2024-11-19 10:55:26.387297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.043 qpair failed and we were unable to recover it. 00:28:19.043 [2024-11-19 10:55:26.387535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.043 [2024-11-19 10:55:26.387567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.043 qpair failed and we were unable to recover it. 00:28:19.043 [2024-11-19 10:55:26.387825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.043 [2024-11-19 10:55:26.387857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.043 qpair failed and we were unable to recover it. 
00:28:19.043 [2024-11-19 10:55:26.388028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.043 [2024-11-19 10:55:26.388061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.043 qpair failed and we were unable to recover it. 00:28:19.043 [2024-11-19 10:55:26.388255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d6af0 is same with the state(6) to be set 00:28:19.043 [2024-11-19 10:55:26.388482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.043 [2024-11-19 10:55:26.388553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.043 qpair failed and we were unable to recover it. 00:28:19.043 [2024-11-19 10:55:26.388761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.043 [2024-11-19 10:55:26.388796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.043 qpair failed and we were unable to recover it. 00:28:19.043 [2024-11-19 10:55:26.389087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.043 [2024-11-19 10:55:26.389122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.043 qpair failed and we were unable to recover it. 00:28:19.043 [2024-11-19 10:55:26.389230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.043 [2024-11-19 10:55:26.389262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.043 qpair failed and we were unable to recover it. 00:28:19.043 [2024-11-19 10:55:26.389430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.044 [2024-11-19 10:55:26.389462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.044 qpair failed and we were unable to recover it. 00:28:19.044 [2024-11-19 10:55:26.389675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.044 [2024-11-19 10:55:26.389707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.044 qpair failed and we were unable to recover it. 00:28:19.044 [2024-11-19 10:55:26.389883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.044 [2024-11-19 10:55:26.389913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.044 qpair failed and we were unable to recover it. 00:28:19.044 [2024-11-19 10:55:26.390072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.044 [2024-11-19 10:55:26.390104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.044 qpair failed and we were unable to recover it. 
00:28:19.044 [2024-11-19 10:55:26.390241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.044 [2024-11-19 10:55:26.390272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.044 qpair failed and we were unable to recover it. 00:28:19.044 [2024-11-19 10:55:26.390445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.044 [2024-11-19 10:55:26.390476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.044 qpair failed and we were unable to recover it. 00:28:19.044 [2024-11-19 10:55:26.390722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.044 [2024-11-19 10:55:26.390754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.044 qpair failed and we were unable to recover it. 00:28:19.044 [2024-11-19 10:55:26.390939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.044 [2024-11-19 10:55:26.390984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.044 qpair failed and we were unable to recover it. 00:28:19.044 [2024-11-19 10:55:26.391119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.044 [2024-11-19 10:55:26.391149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.044 qpair failed and we were unable to recover it. 00:28:19.044 [2024-11-19 10:55:26.391279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.044 [2024-11-19 10:55:26.391310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.044 qpair failed and we were unable to recover it. 00:28:19.044 [2024-11-19 10:55:26.391494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.044 [2024-11-19 10:55:26.391526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.044 qpair failed and we were unable to recover it. 00:28:19.044 [2024-11-19 10:55:26.391740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.044 [2024-11-19 10:55:26.391772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.044 qpair failed and we were unable to recover it. 00:28:19.044 [2024-11-19 10:55:26.391958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.044 [2024-11-19 10:55:26.391990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.044 qpair failed and we were unable to recover it. 00:28:19.044 [2024-11-19 10:55:26.392173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.044 [2024-11-19 10:55:26.392205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.044 qpair failed and we were unable to recover it. 
00:28:19.044 [2024-11-19 10:55:26.392384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.044 [2024-11-19 10:55:26.392415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.044 qpair failed and we were unable to recover it. 00:28:19.044 [2024-11-19 10:55:26.392602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.044 [2024-11-19 10:55:26.392634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.044 qpair failed and we were unable to recover it. 00:28:19.044 [2024-11-19 10:55:26.392742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.044 [2024-11-19 10:55:26.392772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.044 qpair failed and we were unable to recover it. 00:28:19.044 [2024-11-19 10:55:26.392890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.044 [2024-11-19 10:55:26.392921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.044 qpair failed and we were unable to recover it. 00:28:19.044 [2024-11-19 10:55:26.393106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.044 [2024-11-19 10:55:26.393177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.044 qpair failed and we were unable to recover it. 00:28:19.044 [2024-11-19 10:55:26.393371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.044 [2024-11-19 10:55:26.393408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.044 qpair failed and we were unable to recover it. 00:28:19.044 [2024-11-19 10:55:26.393641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.044 [2024-11-19 10:55:26.393673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.044 qpair failed and we were unable to recover it. 00:28:19.044 [2024-11-19 10:55:26.393800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.044 [2024-11-19 10:55:26.393832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.044 qpair failed and we were unable to recover it. 00:28:19.044 [2024-11-19 10:55:26.394034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.044 [2024-11-19 10:55:26.394082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.044 qpair failed and we were unable to recover it. 00:28:19.044 [2024-11-19 10:55:26.394197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.044 [2024-11-19 10:55:26.394230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.044 qpair failed and we were unable to recover it. 
00:28:19.044 [2024-11-19 10:55:26.394418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.044 [2024-11-19 10:55:26.394449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.044 qpair failed and we were unable to recover it. 00:28:19.044 [2024-11-19 10:55:26.394726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.044 [2024-11-19 10:55:26.394758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.044 qpair failed and we were unable to recover it. 00:28:19.044 [2024-11-19 10:55:26.394934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.044 [2024-11-19 10:55:26.394982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.044 qpair failed and we were unable to recover it. 00:28:19.044 [2024-11-19 10:55:26.395097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.044 [2024-11-19 10:55:26.395128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.044 qpair failed and we were unable to recover it. 00:28:19.044 [2024-11-19 10:55:26.395335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.044 [2024-11-19 10:55:26.395367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.044 qpair failed and we were unable to recover it. 00:28:19.044 [2024-11-19 10:55:26.395537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.044 [2024-11-19 10:55:26.395570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.044 qpair failed and we were unable to recover it. 00:28:19.044 [2024-11-19 10:55:26.395756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.044 [2024-11-19 10:55:26.395787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.044 qpair failed and we were unable to recover it. 00:28:19.044 [2024-11-19 10:55:26.395904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.044 [2024-11-19 10:55:26.395936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.044 qpair failed and we were unable to recover it. 00:28:19.044 [2024-11-19 10:55:26.396194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.044 [2024-11-19 10:55:26.396227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.044 qpair failed and we were unable to recover it. 00:28:19.044 [2024-11-19 10:55:26.396405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.045 [2024-11-19 10:55:26.396437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.045 qpair failed and we were unable to recover it. 
00:28:19.045 [2024-11-19 10:55:26.396672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.045 [2024-11-19 10:55:26.396704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.045 qpair failed and we were unable to recover it. 00:28:19.045 [2024-11-19 10:55:26.396970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.045 [2024-11-19 10:55:26.397004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.045 qpair failed and we were unable to recover it. 00:28:19.045 [2024-11-19 10:55:26.397200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.045 [2024-11-19 10:55:26.397231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.045 qpair failed and we were unable to recover it. 00:28:19.045 [2024-11-19 10:55:26.397434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.045 [2024-11-19 10:55:26.397466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.045 qpair failed and we were unable to recover it. 00:28:19.045 [2024-11-19 10:55:26.397739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.045 [2024-11-19 10:55:26.397770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.045 qpair failed and we were unable to recover it. 00:28:19.045 [2024-11-19 10:55:26.397967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.045 [2024-11-19 10:55:26.398000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.045 qpair failed and we were unable to recover it. 00:28:19.045 [2024-11-19 10:55:26.398203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.045 [2024-11-19 10:55:26.398235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.045 qpair failed and we were unable to recover it. 00:28:19.045 [2024-11-19 10:55:26.398373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.045 [2024-11-19 10:55:26.398403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.045 qpair failed and we were unable to recover it. 00:28:19.045 [2024-11-19 10:55:26.398574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.045 [2024-11-19 10:55:26.398607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.045 qpair failed and we were unable to recover it. 00:28:19.045 [2024-11-19 10:55:26.398777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.045 [2024-11-19 10:55:26.398809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.045 qpair failed and we were unable to recover it. 
00:28:19.045 [2024-11-19 10:55:26.399001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.045 [2024-11-19 10:55:26.399034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.045 qpair failed and we were unable to recover it. 00:28:19.045 [2024-11-19 10:55:26.399216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.045 [2024-11-19 10:55:26.399247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.045 qpair failed and we were unable to recover it. 00:28:19.045 [2024-11-19 10:55:26.399416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.045 [2024-11-19 10:55:26.399448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.045 qpair failed and we were unable to recover it. 00:28:19.045 [2024-11-19 10:55:26.399630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.045 [2024-11-19 10:55:26.399661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.045 qpair failed and we were unable to recover it. 00:28:19.045 [2024-11-19 10:55:26.399775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.045 [2024-11-19 10:55:26.399807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.045 qpair failed and we were unable to recover it. 00:28:19.045 [2024-11-19 10:55:26.399917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.045 [2024-11-19 10:55:26.399957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.045 qpair failed and we were unable to recover it. 00:28:19.045 [2024-11-19 10:55:26.400129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.045 [2024-11-19 10:55:26.400161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.045 qpair failed and we were unable to recover it. 00:28:19.045 [2024-11-19 10:55:26.400371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.045 [2024-11-19 10:55:26.400402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.045 qpair failed and we were unable to recover it. 00:28:19.045 [2024-11-19 10:55:26.400575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.045 [2024-11-19 10:55:26.400608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.045 qpair failed and we were unable to recover it. 00:28:19.045 [2024-11-19 10:55:26.400785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.045 [2024-11-19 10:55:26.400815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.045 qpair failed and we were unable to recover it. 
00:28:19.045 [2024-11-19 10:55:26.401020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.045 [2024-11-19 10:55:26.401053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.045 qpair failed and we were unable to recover it. 00:28:19.045 [2024-11-19 10:55:26.401232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.045 [2024-11-19 10:55:26.401264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.045 qpair failed and we were unable to recover it. 00:28:19.045 [2024-11-19 10:55:26.401448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.045 [2024-11-19 10:55:26.401480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.045 qpair failed and we were unable to recover it. 00:28:19.045 [2024-11-19 10:55:26.401718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.045 [2024-11-19 10:55:26.401751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.045 qpair failed and we were unable to recover it. 00:28:19.045 [2024-11-19 10:55:26.401923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.045 [2024-11-19 10:55:26.401964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.045 qpair failed and we were unable to recover it. 00:28:19.045 [2024-11-19 10:55:26.402145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.045 [2024-11-19 10:55:26.402177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.045 qpair failed and we were unable to recover it. 00:28:19.045 [2024-11-19 10:55:26.402360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.045 [2024-11-19 10:55:26.402392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.045 qpair failed and we were unable to recover it. 00:28:19.045 [2024-11-19 10:55:26.402570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.045 [2024-11-19 10:55:26.402602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.045 qpair failed and we were unable to recover it. 00:28:19.045 [2024-11-19 10:55:26.402846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.045 [2024-11-19 10:55:26.402884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.045 qpair failed and we were unable to recover it. 00:28:19.045 [2024-11-19 10:55:26.403055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.045 [2024-11-19 10:55:26.403099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.045 qpair failed and we were unable to recover it. 
00:28:19.045 [2024-11-19 10:55:26.403360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.045 [2024-11-19 10:55:26.403392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.045 qpair failed and we were unable to recover it. 00:28:19.045 [2024-11-19 10:55:26.403583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.045 [2024-11-19 10:55:26.403614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.045 qpair failed and we were unable to recover it. 00:28:19.045 [2024-11-19 10:55:26.403816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.045 [2024-11-19 10:55:26.403848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.045 qpair failed and we were unable to recover it. 00:28:19.045 [2024-11-19 10:55:26.404058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.045 [2024-11-19 10:55:26.404090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.045 qpair failed and we were unable to recover it. 00:28:19.045 [2024-11-19 10:55:26.404326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.045 [2024-11-19 10:55:26.404359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.045 qpair failed and we were unable to recover it. 00:28:19.045 [2024-11-19 10:55:26.404537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.045 [2024-11-19 10:55:26.404569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.045 qpair failed and we were unable to recover it. 00:28:19.045 [2024-11-19 10:55:26.404756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.045 [2024-11-19 10:55:26.404787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.045 qpair failed and we were unable to recover it. 00:28:19.046 [2024-11-19 10:55:26.404966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.046 [2024-11-19 10:55:26.405000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.046 qpair failed and we were unable to recover it. 00:28:19.046 [2024-11-19 10:55:26.405118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.046 [2024-11-19 10:55:26.405151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.046 qpair failed and we were unable to recover it. 00:28:19.046 [2024-11-19 10:55:26.405274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.046 [2024-11-19 10:55:26.405305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.046 qpair failed and we were unable to recover it. 
00:28:19.046 [2024-11-19 10:55:26.405500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.046 [2024-11-19 10:55:26.405532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.046 qpair failed and we were unable to recover it. 00:28:19.046 [2024-11-19 10:55:26.405716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.046 [2024-11-19 10:55:26.405748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.046 qpair failed and we were unable to recover it. 00:28:19.046 [2024-11-19 10:55:26.405858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.046 [2024-11-19 10:55:26.405890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.046 qpair failed and we were unable to recover it. 00:28:19.046 [2024-11-19 10:55:26.406019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.046 [2024-11-19 10:55:26.406053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.046 qpair failed and we were unable to recover it. 00:28:19.046 [2024-11-19 10:55:26.406246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.046 [2024-11-19 10:55:26.406279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.046 qpair failed and we were unable to recover it. 00:28:19.046 [2024-11-19 10:55:26.406454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.046 [2024-11-19 10:55:26.406486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.046 qpair failed and we were unable to recover it. 00:28:19.046 [2024-11-19 10:55:26.406655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.046 [2024-11-19 10:55:26.406686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.046 qpair failed and we were unable to recover it. 00:28:19.046 [2024-11-19 10:55:26.406928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.046 [2024-11-19 10:55:26.406971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.046 qpair failed and we were unable to recover it. 00:28:19.046 [2024-11-19 10:55:26.407105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.046 [2024-11-19 10:55:26.407136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.046 qpair failed and we were unable to recover it. 00:28:19.046 [2024-11-19 10:55:26.407306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.046 [2024-11-19 10:55:26.407338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.046 qpair failed and we were unable to recover it. 
00:28:19.046 [2024-11-19 10:55:26.407443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.046 [2024-11-19 10:55:26.407474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.046 qpair failed and we were unable to recover it. 00:28:19.046 [2024-11-19 10:55:26.407646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.046 [2024-11-19 10:55:26.407677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.046 qpair failed and we were unable to recover it. 00:28:19.046 [2024-11-19 10:55:26.407862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.046 [2024-11-19 10:55:26.407894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.046 qpair failed and we were unable to recover it. 00:28:19.046 [2024-11-19 10:55:26.408024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.046 [2024-11-19 10:55:26.408058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.046 qpair failed and we were unable to recover it. 00:28:19.046 [2024-11-19 10:55:26.408175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.046 [2024-11-19 10:55:26.408206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.046 qpair failed and we were unable to recover it. 00:28:19.046 [2024-11-19 10:55:26.408315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.046 [2024-11-19 10:55:26.408347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.046 qpair failed and we were unable to recover it. 00:28:19.046 [2024-11-19 10:55:26.408583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.046 [2024-11-19 10:55:26.408615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.046 qpair failed and we were unable to recover it. 00:28:19.046 [2024-11-19 10:55:26.408876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.046 [2024-11-19 10:55:26.408908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.046 qpair failed and we were unable to recover it. 00:28:19.046 [2024-11-19 10:55:26.409106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.046 [2024-11-19 10:55:26.409138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.046 qpair failed and we were unable to recover it. 00:28:19.046 [2024-11-19 10:55:26.409265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.046 [2024-11-19 10:55:26.409296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.046 qpair failed and we were unable to recover it. 
00:28:19.046 [2024-11-19 10:55:26.409483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.046 [2024-11-19 10:55:26.409514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.046 qpair failed and we were unable to recover it. 00:28:19.046 [2024-11-19 10:55:26.409778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.046 [2024-11-19 10:55:26.409810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.046 qpair failed and we were unable to recover it. 00:28:19.046 [2024-11-19 10:55:26.409934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.046 [2024-11-19 10:55:26.409987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.046 qpair failed and we were unable to recover it. 00:28:19.046 [2024-11-19 10:55:26.410232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.046 [2024-11-19 10:55:26.410264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.046 qpair failed and we were unable to recover it. 00:28:19.046 [2024-11-19 10:55:26.410378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.046 [2024-11-19 10:55:26.410408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.046 qpair failed and we were unable to recover it. 00:28:19.046 [2024-11-19 10:55:26.410604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.046 [2024-11-19 10:55:26.410636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.046 qpair failed and we were unable to recover it. 00:28:19.046 [2024-11-19 10:55:26.410745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.046 [2024-11-19 10:55:26.410777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.046 qpair failed and we were unable to recover it. 00:28:19.046 [2024-11-19 10:55:26.410957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.046 [2024-11-19 10:55:26.410993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.046 qpair failed and we were unable to recover it. 00:28:19.046 [2024-11-19 10:55:26.411106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.046 [2024-11-19 10:55:26.411143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.046 qpair failed and we were unable to recover it. 00:28:19.046 [2024-11-19 10:55:26.411380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.046 [2024-11-19 10:55:26.411412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.046 qpair failed and we were unable to recover it. 
00:28:19.046 [2024-11-19 10:55:26.411514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.046 [2024-11-19 10:55:26.411546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.046 qpair failed and we were unable to recover it. 00:28:19.046 [2024-11-19 10:55:26.411746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.046 [2024-11-19 10:55:26.411777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.046 qpair failed and we were unable to recover it. 00:28:19.046 [2024-11-19 10:55:26.411893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.046 [2024-11-19 10:55:26.411924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.046 qpair failed and we were unable to recover it. 00:28:19.046 [2024-11-19 10:55:26.412108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.046 [2024-11-19 10:55:26.412140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.046 qpair failed and we were unable to recover it. 00:28:19.046 [2024-11-19 10:55:26.412254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.046 [2024-11-19 10:55:26.412286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.046 qpair failed and we were unable to recover it. 00:28:19.047 [2024-11-19 10:55:26.412538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.047 [2024-11-19 10:55:26.412569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.047 qpair failed and we were unable to recover it. 00:28:19.047 [2024-11-19 10:55:26.412746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.047 [2024-11-19 10:55:26.412777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.047 qpair failed and we were unable to recover it. 00:28:19.047 [2024-11-19 10:55:26.412967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.047 [2024-11-19 10:55:26.413000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.047 qpair failed and we were unable to recover it. 00:28:19.047 [2024-11-19 10:55:26.413172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.047 [2024-11-19 10:55:26.413203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.047 qpair failed and we were unable to recover it. 00:28:19.047 [2024-11-19 10:55:26.413472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.047 [2024-11-19 10:55:26.413504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.047 qpair failed and we were unable to recover it. 
00:28:19.047 [2024-11-19 10:55:26.413622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.047 [2024-11-19 10:55:26.413653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.047 qpair failed and we were unable to recover it. 00:28:19.047 [2024-11-19 10:55:26.413912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.047 [2024-11-19 10:55:26.413943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.047 qpair failed and we were unable to recover it. 00:28:19.047 [2024-11-19 10:55:26.414238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.047 [2024-11-19 10:55:26.414271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.047 qpair failed and we were unable to recover it. 00:28:19.047 [2024-11-19 10:55:26.414482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.047 [2024-11-19 10:55:26.414513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.047 qpair failed and we were unable to recover it. 00:28:19.047 [2024-11-19 10:55:26.414713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.047 [2024-11-19 10:55:26.414743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.047 qpair failed and we were unable to recover it. 00:28:19.047 [2024-11-19 10:55:26.414928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.047 [2024-11-19 10:55:26.414969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.047 qpair failed and we were unable to recover it. 00:28:19.047 [2024-11-19 10:55:26.415184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.047 [2024-11-19 10:55:26.415214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.047 qpair failed and we were unable to recover it. 00:28:19.047 [2024-11-19 10:55:26.415478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.047 [2024-11-19 10:55:26.415510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.047 qpair failed and we were unable to recover it. 00:28:19.047 [2024-11-19 10:55:26.415630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.047 [2024-11-19 10:55:26.415661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.047 qpair failed and we were unable to recover it. 00:28:19.047 [2024-11-19 10:55:26.415915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.047 [2024-11-19 10:55:26.415956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.047 qpair failed and we were unable to recover it. 
00:28:19.047 [2024-11-19 10:55:26.416167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.047 [2024-11-19 10:55:26.416198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.047 qpair failed and we were unable to recover it. 00:28:19.047 [2024-11-19 10:55:26.416452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.047 [2024-11-19 10:55:26.416484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.047 qpair failed and we were unable to recover it. 00:28:19.047 [2024-11-19 10:55:26.416667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.047 [2024-11-19 10:55:26.416699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.047 qpair failed and we were unable to recover it. 00:28:19.047 [2024-11-19 10:55:26.416885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.047 [2024-11-19 10:55:26.416917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.047 qpair failed and we were unable to recover it. 00:28:19.047 [2024-11-19 10:55:26.417130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.047 [2024-11-19 10:55:26.417164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.047 qpair failed and we were unable to recover it. 00:28:19.047 [2024-11-19 10:55:26.417472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.047 [2024-11-19 10:55:26.417543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.047 qpair failed and we were unable to recover it. 00:28:19.047 [2024-11-19 10:55:26.417737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.047 [2024-11-19 10:55:26.417772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.047 qpair failed and we were unable to recover it. 00:28:19.047 [2024-11-19 10:55:26.417964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.047 [2024-11-19 10:55:26.418001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.047 qpair failed and we were unable to recover it. 00:28:19.047 [2024-11-19 10:55:26.418176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.047 [2024-11-19 10:55:26.418207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.047 qpair failed and we were unable to recover it. 00:28:19.047 [2024-11-19 10:55:26.418395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.047 [2024-11-19 10:55:26.418426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.047 qpair failed and we were unable to recover it. 
00:28:19.047 [2024-11-19 10:55:26.418543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.047 [2024-11-19 10:55:26.418574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.047 qpair failed and we were unable to recover it. 00:28:19.047 [2024-11-19 10:55:26.418758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.047 [2024-11-19 10:55:26.418790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.047 qpair failed and we were unable to recover it. 00:28:19.047 [2024-11-19 10:55:26.418920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.047 [2024-11-19 10:55:26.418959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.047 qpair failed and we were unable to recover it. 00:28:19.047 [2024-11-19 10:55:26.419217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.047 [2024-11-19 10:55:26.419248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.047 qpair failed and we were unable to recover it. 00:28:19.047 [2024-11-19 10:55:26.419438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.047 [2024-11-19 10:55:26.419468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.047 qpair failed and we were unable to recover it. 00:28:19.047 [2024-11-19 10:55:26.419592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.047 [2024-11-19 10:55:26.419622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.047 qpair failed and we were unable to recover it. 00:28:19.047 [2024-11-19 10:55:26.419807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.047 [2024-11-19 10:55:26.419839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.047 qpair failed and we were unable to recover it. 00:28:19.047 [2024-11-19 10:55:26.420085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.047 [2024-11-19 10:55:26.420119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.047 qpair failed and we were unable to recover it. 00:28:19.047 [2024-11-19 10:55:26.420351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.047 [2024-11-19 10:55:26.420392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.047 qpair failed and we were unable to recover it. 00:28:19.047 [2024-11-19 10:55:26.420585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.047 [2024-11-19 10:55:26.420616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.047 qpair failed and we were unable to recover it. 
00:28:19.047 [2024-11-19 10:55:26.420796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.047 [2024-11-19 10:55:26.420829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.047 qpair failed and we were unable to recover it. 00:28:19.047 [2024-11-19 10:55:26.420946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.047 [2024-11-19 10:55:26.420986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.047 qpair failed and we were unable to recover it. 00:28:19.047 [2024-11-19 10:55:26.421106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.047 [2024-11-19 10:55:26.421138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.048 qpair failed and we were unable to recover it. 00:28:19.048 [2024-11-19 10:55:26.421396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.048 [2024-11-19 10:55:26.421428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.048 qpair failed and we were unable to recover it. 00:28:19.048 [2024-11-19 10:55:26.421552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.048 [2024-11-19 10:55:26.421583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.048 qpair failed and we were unable to recover it. 00:28:19.048 [2024-11-19 10:55:26.421768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.048 [2024-11-19 10:55:26.421799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.048 qpair failed and we were unable to recover it. 00:28:19.048 [2024-11-19 10:55:26.421972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.048 [2024-11-19 10:55:26.422003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.048 qpair failed and we were unable to recover it. 00:28:19.048 [2024-11-19 10:55:26.422215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.048 [2024-11-19 10:55:26.422247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.048 qpair failed and we were unable to recover it. 00:28:19.048 [2024-11-19 10:55:26.422417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.048 [2024-11-19 10:55:26.422447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.048 qpair failed and we were unable to recover it. 00:28:19.048 [2024-11-19 10:55:26.422635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.048 [2024-11-19 10:55:26.422666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.048 qpair failed and we were unable to recover it. 
00:28:19.048 [2024-11-19 10:55:26.422854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.048 [2024-11-19 10:55:26.422885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.048 qpair failed and we were unable to recover it. 00:28:19.048 [2024-11-19 10:55:26.423124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.048 [2024-11-19 10:55:26.423158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.048 qpair failed and we were unable to recover it. 00:28:19.048 [2024-11-19 10:55:26.423338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.048 [2024-11-19 10:55:26.423368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.048 qpair failed and we were unable to recover it. 00:28:19.048 [2024-11-19 10:55:26.423552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.048 [2024-11-19 10:55:26.423582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.048 qpair failed and we were unable to recover it. 00:28:19.048 [2024-11-19 10:55:26.423783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.048 [2024-11-19 10:55:26.423813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.048 qpair failed and we were unable to recover it. 00:28:19.048 [2024-11-19 10:55:26.424024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.048 [2024-11-19 10:55:26.424058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.048 qpair failed and we were unable to recover it. 00:28:19.048 [2024-11-19 10:55:26.424228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.048 [2024-11-19 10:55:26.424259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.048 qpair failed and we were unable to recover it. 00:28:19.048 [2024-11-19 10:55:26.424428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.048 [2024-11-19 10:55:26.424459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.048 qpair failed and we were unable to recover it. 00:28:19.048 [2024-11-19 10:55:26.424581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.048 [2024-11-19 10:55:26.424613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.048 qpair failed and we were unable to recover it. 00:28:19.048 [2024-11-19 10:55:26.424726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.048 [2024-11-19 10:55:26.424756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.048 qpair failed and we were unable to recover it. 
00:28:19.048 [2024-11-19 10:55:26.425021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.048 [2024-11-19 10:55:26.425055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.048 qpair failed and we were unable to recover it. 00:28:19.048 [2024-11-19 10:55:26.425186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.048 [2024-11-19 10:55:26.425218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.048 qpair failed and we were unable to recover it. 00:28:19.048 [2024-11-19 10:55:26.425354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.048 [2024-11-19 10:55:26.425385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.048 qpair failed and we were unable to recover it. 00:28:19.048 [2024-11-19 10:55:26.425559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.048 [2024-11-19 10:55:26.425590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.048 qpair failed and we were unable to recover it. 00:28:19.048 [2024-11-19 10:55:26.425692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.048 [2024-11-19 10:55:26.425724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.048 qpair failed and we were unable to recover it. 00:28:19.048 [2024-11-19 10:55:26.425969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.048 [2024-11-19 10:55:26.426039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.048 qpair failed and we were unable to recover it. 00:28:19.048 [2024-11-19 10:55:26.426302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.048 [2024-11-19 10:55:26.426338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.048 qpair failed and we were unable to recover it. 00:28:19.048 [2024-11-19 10:55:26.426579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.048 [2024-11-19 10:55:26.426612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.048 qpair failed and we were unable to recover it. 00:28:19.048 [2024-11-19 10:55:26.426751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.048 [2024-11-19 10:55:26.426785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.048 qpair failed and we were unable to recover it. 00:28:19.048 [2024-11-19 10:55:26.427028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.048 [2024-11-19 10:55:26.427061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.048 qpair failed and we were unable to recover it. 
00:28:19.048 [2024-11-19 10:55:26.427272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.048 [2024-11-19 10:55:26.427305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.048 qpair failed and we were unable to recover it. 00:28:19.048 [2024-11-19 10:55:26.427532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.048 [2024-11-19 10:55:26.427565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.048 qpair failed and we were unable to recover it. 00:28:19.048 [2024-11-19 10:55:26.427852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.048 [2024-11-19 10:55:26.427883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.048 qpair failed and we were unable to recover it. 00:28:19.048 [2024-11-19 10:55:26.428081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.048 [2024-11-19 10:55:26.428114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.048 qpair failed and we were unable to recover it. 00:28:19.048 [2024-11-19 10:55:26.428288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.048 [2024-11-19 10:55:26.428321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.048 qpair failed and we were unable to recover it. 00:28:19.048 [2024-11-19 10:55:26.428493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.048 [2024-11-19 10:55:26.428523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.048 qpair failed and we were unable to recover it. 00:28:19.048 [2024-11-19 10:55:26.428704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.048 [2024-11-19 10:55:26.428736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.049 qpair failed and we were unable to recover it. 00:28:19.049 [2024-11-19 10:55:26.428873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.049 [2024-11-19 10:55:26.428904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.049 qpair failed and we were unable to recover it. 00:28:19.049 [2024-11-19 10:55:26.429107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.049 [2024-11-19 10:55:26.429149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.049 qpair failed and we were unable to recover it. 00:28:19.049 [2024-11-19 10:55:26.429333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.049 [2024-11-19 10:55:26.429364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.049 qpair failed and we were unable to recover it. 
00:28:19.049 [2024-11-19 10:55:26.429547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.049 [2024-11-19 10:55:26.429579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.049 qpair failed and we were unable to recover it. 00:28:19.049 [2024-11-19 10:55:26.429788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.049 [2024-11-19 10:55:26.429818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.049 qpair failed and we were unable to recover it. 00:28:19.049 [2024-11-19 10:55:26.430063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.049 [2024-11-19 10:55:26.430097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.049 qpair failed and we were unable to recover it. 00:28:19.049 [2024-11-19 10:55:26.430357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.049 [2024-11-19 10:55:26.430389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.049 qpair failed and we were unable to recover it. 00:28:19.049 [2024-11-19 10:55:26.430646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.049 [2024-11-19 10:55:26.430678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.049 qpair failed and we were unable to recover it. 00:28:19.049 [2024-11-19 10:55:26.430819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.049 [2024-11-19 10:55:26.430850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.049 qpair failed and we were unable to recover it. 00:28:19.049 [2024-11-19 10:55:26.431063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.049 [2024-11-19 10:55:26.431097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.049 qpair failed and we were unable to recover it. 00:28:19.049 [2024-11-19 10:55:26.431360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.049 [2024-11-19 10:55:26.431392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.049 qpair failed and we were unable to recover it. 00:28:19.049 [2024-11-19 10:55:26.431575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.049 [2024-11-19 10:55:26.431607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.049 qpair failed and we were unable to recover it. 00:28:19.049 [2024-11-19 10:55:26.431792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.049 [2024-11-19 10:55:26.431823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.049 qpair failed and we were unable to recover it. 
00:28:19.049 [2024-11-19 10:55:26.432009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.049 [2024-11-19 10:55:26.432042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.049 qpair failed and we were unable to recover it. 00:28:19.049 [2024-11-19 10:55:26.432230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.049 [2024-11-19 10:55:26.432261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.049 qpair failed and we were unable to recover it. 00:28:19.049 [2024-11-19 10:55:26.432455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.049 [2024-11-19 10:55:26.432486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.049 qpair failed and we were unable to recover it. 00:28:19.049 [2024-11-19 10:55:26.432748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.049 [2024-11-19 10:55:26.432780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.049 qpair failed and we were unable to recover it. 00:28:19.049 [2024-11-19 10:55:26.432960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.049 [2024-11-19 10:55:26.432994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.049 qpair failed and we were unable to recover it. 00:28:19.049 [2024-11-19 10:55:26.433239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.049 [2024-11-19 10:55:26.433271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.049 qpair failed and we were unable to recover it. 00:28:19.049 [2024-11-19 10:55:26.433452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.049 [2024-11-19 10:55:26.433484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.049 qpair failed and we were unable to recover it. 00:28:19.049 [2024-11-19 10:55:26.433663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.049 [2024-11-19 10:55:26.433695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.049 qpair failed and we were unable to recover it. 00:28:19.049 [2024-11-19 10:55:26.433881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.049 [2024-11-19 10:55:26.433913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.049 qpair failed and we were unable to recover it. 00:28:19.049 [2024-11-19 10:55:26.434115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.049 [2024-11-19 10:55:26.434147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.049 qpair failed and we were unable to recover it. 
00:28:19.049 [2024-11-19 10:55:26.434274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.049 [2024-11-19 10:55:26.434307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.049 qpair failed and we were unable to recover it. 00:28:19.049 [2024-11-19 10:55:26.434414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.049 [2024-11-19 10:55:26.434445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.049 qpair failed and we were unable to recover it. 00:28:19.049 [2024-11-19 10:55:26.434620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.049 [2024-11-19 10:55:26.434651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.049 qpair failed and we were unable to recover it. 00:28:19.049 [2024-11-19 10:55:26.434833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.049 [2024-11-19 10:55:26.434864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.049 qpair failed and we were unable to recover it. 00:28:19.049 [2024-11-19 10:55:26.434982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.049 [2024-11-19 10:55:26.435014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.049 qpair failed and we were unable to recover it. 00:28:19.049 [2024-11-19 10:55:26.435324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.049 [2024-11-19 10:55:26.435396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.049 qpair failed and we were unable to recover it. 00:28:19.049 [2024-11-19 10:55:26.435524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.049 [2024-11-19 10:55:26.435559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.049 qpair failed and we were unable to recover it. 00:28:19.049 [2024-11-19 10:55:26.435679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.049 [2024-11-19 10:55:26.435713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.049 qpair failed and we were unable to recover it. 00:28:19.049 [2024-11-19 10:55:26.435899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.049 [2024-11-19 10:55:26.435930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.049 qpair failed and we were unable to recover it. 00:28:19.049 [2024-11-19 10:55:26.436144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.049 [2024-11-19 10:55:26.436176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.049 qpair failed and we were unable to recover it. 
00:28:19.049 [2024-11-19 10:55:26.436350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.049 [2024-11-19 10:55:26.436381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.049 qpair failed and we were unable to recover it. 00:28:19.049 [2024-11-19 10:55:26.436577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.049 [2024-11-19 10:55:26.436612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.049 qpair failed and we were unable to recover it. 00:28:19.049 [2024-11-19 10:55:26.436851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.049 [2024-11-19 10:55:26.436882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.049 qpair failed and we were unable to recover it. 00:28:19.049 [2024-11-19 10:55:26.437135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.049 [2024-11-19 10:55:26.437169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.049 qpair failed and we were unable to recover it. 00:28:19.049 [2024-11-19 10:55:26.437433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.050 [2024-11-19 10:55:26.437465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.050 qpair failed and we were unable to recover it. 00:28:19.050 [2024-11-19 10:55:26.437644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.050 [2024-11-19 10:55:26.437675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.050 qpair failed and we were unable to recover it. 00:28:19.050 [2024-11-19 10:55:26.437801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.050 [2024-11-19 10:55:26.437832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.050 qpair failed and we were unable to recover it. 00:28:19.050 [2024-11-19 10:55:26.438037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.050 [2024-11-19 10:55:26.438070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.050 qpair failed and we were unable to recover it. 00:28:19.050 [2024-11-19 10:55:26.438197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.050 [2024-11-19 10:55:26.438229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.050 qpair failed and we were unable to recover it. 00:28:19.050 [2024-11-19 10:55:26.438450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.050 [2024-11-19 10:55:26.438482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.050 qpair failed and we were unable to recover it. 
00:28:19.050 [2024-11-19 10:55:26.438663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.050 [2024-11-19 10:55:26.438696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.050 qpair failed and we were unable to recover it. 00:28:19.050 [2024-11-19 10:55:26.438833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.050 [2024-11-19 10:55:26.438865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.050 qpair failed and we were unable to recover it. 00:28:19.050 [2024-11-19 10:55:26.439056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.050 [2024-11-19 10:55:26.439089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.050 qpair failed and we were unable to recover it. 00:28:19.050 [2024-11-19 10:55:26.439295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.050 [2024-11-19 10:55:26.439325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.050 qpair failed and we were unable to recover it. 00:28:19.050 [2024-11-19 10:55:26.439540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.050 [2024-11-19 10:55:26.439572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.050 qpair failed and we were unable to recover it. 00:28:19.050 [2024-11-19 10:55:26.439739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.050 [2024-11-19 10:55:26.439772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.050 qpair failed and we were unable to recover it. 00:28:19.050 [2024-11-19 10:55:26.439898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.050 [2024-11-19 10:55:26.439929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.050 qpair failed and we were unable to recover it. 00:28:19.050 [2024-11-19 10:55:26.440124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.050 [2024-11-19 10:55:26.440158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.050 qpair failed and we were unable to recover it. 00:28:19.050 [2024-11-19 10:55:26.440339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.050 [2024-11-19 10:55:26.440371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.050 qpair failed and we were unable to recover it. 00:28:19.050 [2024-11-19 10:55:26.440588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.050 [2024-11-19 10:55:26.440620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.050 qpair failed and we were unable to recover it. 
00:28:19.050 [2024-11-19 10:55:26.440808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.050 [2024-11-19 10:55:26.440841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.050 qpair failed and we were unable to recover it. 00:28:19.050 [2024-11-19 10:55:26.441042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.050 [2024-11-19 10:55:26.441074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.050 qpair failed and we were unable to recover it. 00:28:19.050 [2024-11-19 10:55:26.441312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.050 [2024-11-19 10:55:26.441357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.050 qpair failed and we were unable to recover it. 00:28:19.050 [2024-11-19 10:55:26.441489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.050 [2024-11-19 10:55:26.441521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.050 qpair failed and we were unable to recover it. 00:28:19.050 [2024-11-19 10:55:26.441629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.050 [2024-11-19 10:55:26.441660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.050 qpair failed and we were unable to recover it. 00:28:19.050 [2024-11-19 10:55:26.441854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.050 [2024-11-19 10:55:26.441886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.050 qpair failed and we were unable to recover it. 00:28:19.050 [2024-11-19 10:55:26.442018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.050 [2024-11-19 10:55:26.442051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.050 qpair failed and we were unable to recover it. 00:28:19.050 [2024-11-19 10:55:26.442286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.050 [2024-11-19 10:55:26.442318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.050 qpair failed and we were unable to recover it. 00:28:19.050 [2024-11-19 10:55:26.442444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.050 [2024-11-19 10:55:26.442476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.050 qpair failed and we were unable to recover it. 00:28:19.050 [2024-11-19 10:55:26.442647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.050 [2024-11-19 10:55:26.442679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.050 qpair failed and we were unable to recover it. 
00:28:19.050 [2024-11-19 10:55:26.442852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.050 [2024-11-19 10:55:26.442884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.050 qpair failed and we were unable to recover it. 00:28:19.050 [2024-11-19 10:55:26.443073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.050 [2024-11-19 10:55:26.443107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.050 qpair failed and we were unable to recover it. 00:28:19.050 [2024-11-19 10:55:26.443293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.403 [2024-11-19 10:55:26.443325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.403 qpair failed and we were unable to recover it. 00:28:19.403 [2024-11-19 10:55:26.443563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.403 [2024-11-19 10:55:26.443595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.403 qpair failed and we were unable to recover it. 00:28:19.403 [2024-11-19 10:55:26.443779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.403 [2024-11-19 10:55:26.443810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.403 qpair failed and we were unable to recover it. 00:28:19.403 [2024-11-19 10:55:26.443995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.403 [2024-11-19 10:55:26.444027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.403 qpair failed and we were unable to recover it. 00:28:19.403 [2024-11-19 10:55:26.444165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.403 [2024-11-19 10:55:26.444198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.403 qpair failed and we were unable to recover it. 00:28:19.403 [2024-11-19 10:55:26.444377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.403 [2024-11-19 10:55:26.444409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.403 qpair failed and we were unable to recover it. 00:28:19.403 [2024-11-19 10:55:26.444595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.403 [2024-11-19 10:55:26.444627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.403 qpair failed and we were unable to recover it. 00:28:19.403 [2024-11-19 10:55:26.444828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.403 [2024-11-19 10:55:26.444860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.403 qpair failed and we were unable to recover it. 
00:28:19.404 [2024-11-19 10:55:26.445036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.404 [2024-11-19 10:55:26.445069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.404 qpair failed and we were unable to recover it. 00:28:19.404 [2024-11-19 10:55:26.445194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.404 [2024-11-19 10:55:26.445224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.404 qpair failed and we were unable to recover it. 00:28:19.404 [2024-11-19 10:55:26.445390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.404 [2024-11-19 10:55:26.445422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.404 qpair failed and we were unable to recover it. 00:28:19.404 [2024-11-19 10:55:26.445608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.404 [2024-11-19 10:55:26.445644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.404 qpair failed and we were unable to recover it. 00:28:19.404 [2024-11-19 10:55:26.445765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.404 [2024-11-19 10:55:26.445794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.404 qpair failed and we were unable to recover it. 00:28:19.404 [2024-11-19 10:55:26.445928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.404 [2024-11-19 10:55:26.445969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.404 qpair failed and we were unable to recover it. 00:28:19.404 [2024-11-19 10:55:26.446259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.404 [2024-11-19 10:55:26.446291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.404 qpair failed and we were unable to recover it. 00:28:19.404 [2024-11-19 10:55:26.446456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.404 [2024-11-19 10:55:26.446487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.404 qpair failed and we were unable to recover it. 00:28:19.404 [2024-11-19 10:55:26.446672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.404 [2024-11-19 10:55:26.446703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.404 qpair failed and we were unable to recover it. 00:28:19.404 [2024-11-19 10:55:26.446882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.404 [2024-11-19 10:55:26.446921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.404 qpair failed and we were unable to recover it. 
00:28:19.404 [2024-11-19 10:55:26.447105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.404 [2024-11-19 10:55:26.447137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.404 qpair failed and we were unable to recover it. 00:28:19.404 [2024-11-19 10:55:26.447422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.404 [2024-11-19 10:55:26.447453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.404 qpair failed and we were unable to recover it. 00:28:19.404 [2024-11-19 10:55:26.447710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.404 [2024-11-19 10:55:26.447742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.404 qpair failed and we were unable to recover it. 00:28:19.404 [2024-11-19 10:55:26.447852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.404 [2024-11-19 10:55:26.447883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.404 qpair failed and we were unable to recover it. 00:28:19.404 [2024-11-19 10:55:26.448163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.404 [2024-11-19 10:55:26.448196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.404 qpair failed and we were unable to recover it. 00:28:19.404 [2024-11-19 10:55:26.448459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.404 [2024-11-19 10:55:26.448490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.404 qpair failed and we were unable to recover it. 00:28:19.404 [2024-11-19 10:55:26.448693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.404 [2024-11-19 10:55:26.448724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.404 qpair failed and we were unable to recover it. 00:28:19.404 [2024-11-19 10:55:26.448907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.404 [2024-11-19 10:55:26.448939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.404 qpair failed and we were unable to recover it. 00:28:19.404 [2024-11-19 10:55:26.449063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.404 [2024-11-19 10:55:26.449095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.404 qpair failed and we were unable to recover it. 00:28:19.404 [2024-11-19 10:55:26.449329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.404 [2024-11-19 10:55:26.449361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.404 qpair failed and we were unable to recover it. 
00:28:19.404 [2024-11-19 10:55:26.449580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.404 [2024-11-19 10:55:26.449611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.404 qpair failed and we were unable to recover it. 00:28:19.404 [2024-11-19 10:55:26.449846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.404 [2024-11-19 10:55:26.449877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.404 qpair failed and we were unable to recover it. 00:28:19.404 [2024-11-19 10:55:26.450122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.404 [2024-11-19 10:55:26.450155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.404 qpair failed and we were unable to recover it. 00:28:19.404 [2024-11-19 10:55:26.450269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.404 [2024-11-19 10:55:26.450301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.404 qpair failed and we were unable to recover it. 00:28:19.404 [2024-11-19 10:55:26.450596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.404 [2024-11-19 10:55:26.450627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.404 qpair failed and we were unable to recover it. 00:28:19.404 [2024-11-19 10:55:26.450739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.404 [2024-11-19 10:55:26.450770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.404 qpair failed and we were unable to recover it. 00:28:19.404 [2024-11-19 10:55:26.451005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.404 [2024-11-19 10:55:26.451039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.404 qpair failed and we were unable to recover it. 00:28:19.404 [2024-11-19 10:55:26.451156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.404 [2024-11-19 10:55:26.451187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.404 qpair failed and we were unable to recover it. 00:28:19.404 [2024-11-19 10:55:26.451441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.404 [2024-11-19 10:55:26.451472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.404 qpair failed and we were unable to recover it. 00:28:19.404 [2024-11-19 10:55:26.451711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.404 [2024-11-19 10:55:26.451742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.404 qpair failed and we were unable to recover it. 
00:28:19.404 [2024-11-19 10:55:26.451976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.404 [2024-11-19 10:55:26.452009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.404 qpair failed and we were unable to recover it. 00:28:19.404 [2024-11-19 10:55:26.452109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.404 [2024-11-19 10:55:26.452141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.404 qpair failed and we were unable to recover it. 00:28:19.404 [2024-11-19 10:55:26.452241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.404 [2024-11-19 10:55:26.452272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.404 qpair failed and we were unable to recover it. 00:28:19.404 [2024-11-19 10:55:26.452457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.404 [2024-11-19 10:55:26.452490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.404 qpair failed and we were unable to recover it. 00:28:19.404 [2024-11-19 10:55:26.452676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.404 [2024-11-19 10:55:26.452707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.404 qpair failed and we were unable to recover it. 00:28:19.404 [2024-11-19 10:55:26.452930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.404 [2024-11-19 10:55:26.452969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.404 qpair failed and we were unable to recover it. 00:28:19.404 [2024-11-19 10:55:26.453198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.404 [2024-11-19 10:55:26.453235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.404 qpair failed and we were unable to recover it. 00:28:19.404 [2024-11-19 10:55:26.453417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.404 [2024-11-19 10:55:26.453447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.404 qpair failed and we were unable to recover it. 00:28:19.405 [2024-11-19 10:55:26.453628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.405 [2024-11-19 10:55:26.453659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.405 qpair failed and we were unable to recover it. 00:28:19.405 [2024-11-19 10:55:26.453938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.405 [2024-11-19 10:55:26.453984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.405 qpair failed and we were unable to recover it. 
00:28:19.405 [2024-11-19 10:55:26.454170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.405 [2024-11-19 10:55:26.454202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.405 qpair failed and we were unable to recover it. 00:28:19.405 [2024-11-19 10:55:26.454404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.405 [2024-11-19 10:55:26.454436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.405 qpair failed and we were unable to recover it. 00:28:19.405 [2024-11-19 10:55:26.454551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.405 [2024-11-19 10:55:26.454582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.405 qpair failed and we were unable to recover it. 00:28:19.405 [2024-11-19 10:55:26.454753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.405 [2024-11-19 10:55:26.454784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.405 qpair failed and we were unable to recover it. 00:28:19.405 [2024-11-19 10:55:26.455056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.405 [2024-11-19 10:55:26.455089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.405 qpair failed and we were unable to recover it. 00:28:19.405 [2024-11-19 10:55:26.455209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.405 [2024-11-19 10:55:26.455240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.405 qpair failed and we were unable to recover it. 00:28:19.405 [2024-11-19 10:55:26.455418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.405 [2024-11-19 10:55:26.455450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.405 qpair failed and we were unable to recover it. 00:28:19.405 [2024-11-19 10:55:26.455639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.405 [2024-11-19 10:55:26.455670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.405 qpair failed and we were unable to recover it. 00:28:19.405 [2024-11-19 10:55:26.455775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.405 [2024-11-19 10:55:26.455806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.405 qpair failed and we were unable to recover it. 00:28:19.405 [2024-11-19 10:55:26.455992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.405 [2024-11-19 10:55:26.456025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.405 qpair failed and we were unable to recover it. 
00:28:19.405 [2024-11-19 10:55:26.456251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.405 [2024-11-19 10:55:26.456320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.405 qpair failed and we were unable to recover it.
00:28:19.405 [2024-11-19 10:55:26.456506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.405 [2024-11-19 10:55:26.456576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:19.405 qpair failed and we were unable to recover it.
00:28:19.406 [... the same connect() failure (errno = 111) and unrecoverable qpair error for tqpair=0x7f6dd4000b90 repeated through 2024-11-19 10:55:26.465243 ...]
00:28:19.406 [2024-11-19 10:55:26.465503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.406 [2024-11-19 10:55:26.465573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.406 qpair failed and we were unable to recover it.
00:28:19.406 [... the same connect() failure (errno = 111) and unrecoverable qpair error for tqpair=0x7f6de0000b90 repeated through 2024-11-19 10:55:26.468188 ...]
00:28:19.406 [2024-11-19 10:55:26.468394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.406 [2024-11-19 10:55:26.468431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:19.406 qpair failed and we were unable to recover it.
00:28:19.409 [... the same connect() failure (errno = 111) and unrecoverable qpair error for tqpair=0x7f6dd4000b90 repeated through 2024-11-19 10:55:26.492446 ...]
00:28:19.409 [2024-11-19 10:55:26.492777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.409 [2024-11-19 10:55:26.492847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.409 qpair failed and we were unable to recover it.
00:28:19.410 (same sequence repeated 40 times in total for tqpair=0x7f6de0000b90 between 10:55:26.492777 and 10:55:26.501334)
00:28:19.410 [2024-11-19 10:55:26.501583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.410 [2024-11-19 10:55:26.501652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.410 qpair failed and we were unable to recover it.
00:28:19.415 (same sequence repeated 154 times in total for tqpair=0x7f6dd8000b90 between 10:55:26.501583 and 10:55:26.536769, interleaved with 6 further failures for tqpair=0x7f6de0000b90 between 10:55:26.510108 and 10:55:26.511233)
00:28:19.415 [2024-11-19 10:55:26.536899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.415 [2024-11-19 10:55:26.536937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.415 qpair failed and we were unable to recover it. 00:28:19.415 [2024-11-19 10:55:26.537066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.415 [2024-11-19 10:55:26.537097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.415 qpair failed and we were unable to recover it. 00:28:19.415 [2024-11-19 10:55:26.537281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.415 [2024-11-19 10:55:26.537313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.415 qpair failed and we were unable to recover it. 00:28:19.415 [2024-11-19 10:55:26.537442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.415 [2024-11-19 10:55:26.537472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.415 qpair failed and we were unable to recover it. 00:28:19.415 [2024-11-19 10:55:26.537643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.415 [2024-11-19 10:55:26.537673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.415 qpair failed and we were unable to recover it. 00:28:19.415 [2024-11-19 10:55:26.537921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.415 [2024-11-19 10:55:26.537962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.415 qpair failed and we were unable to recover it. 00:28:19.415 [2024-11-19 10:55:26.538163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.415 [2024-11-19 10:55:26.538193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.415 qpair failed and we were unable to recover it. 00:28:19.415 [2024-11-19 10:55:26.538366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.415 [2024-11-19 10:55:26.538396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.415 qpair failed and we were unable to recover it. 00:28:19.415 [2024-11-19 10:55:26.538564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.415 [2024-11-19 10:55:26.538595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.415 qpair failed and we were unable to recover it. 00:28:19.415 [2024-11-19 10:55:26.538780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.415 [2024-11-19 10:55:26.538812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.415 qpair failed and we were unable to recover it. 
00:28:19.415 [2024-11-19 10:55:26.539050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.415 [2024-11-19 10:55:26.539083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.415 qpair failed and we were unable to recover it. 00:28:19.415 [2024-11-19 10:55:26.539269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.415 [2024-11-19 10:55:26.539300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.415 qpair failed and we were unable to recover it. 00:28:19.415 [2024-11-19 10:55:26.539509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.415 [2024-11-19 10:55:26.539541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.415 qpair failed and we were unable to recover it. 00:28:19.415 [2024-11-19 10:55:26.539804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.415 [2024-11-19 10:55:26.539836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.415 qpair failed and we were unable to recover it. 00:28:19.415 [2024-11-19 10:55:26.540022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.415 [2024-11-19 10:55:26.540055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.415 qpair failed and we were unable to recover it. 00:28:19.415 [2024-11-19 10:55:26.540244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.415 [2024-11-19 10:55:26.540274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.415 qpair failed and we were unable to recover it. 00:28:19.415 [2024-11-19 10:55:26.540447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.415 [2024-11-19 10:55:26.540479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.415 qpair failed and we were unable to recover it. 00:28:19.415 [2024-11-19 10:55:26.540596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.415 [2024-11-19 10:55:26.540627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.415 qpair failed and we were unable to recover it. 00:28:19.415 [2024-11-19 10:55:26.540865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.415 [2024-11-19 10:55:26.540896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.415 qpair failed and we were unable to recover it. 00:28:19.415 [2024-11-19 10:55:26.541178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.415 [2024-11-19 10:55:26.541211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.415 qpair failed and we were unable to recover it. 
00:28:19.415 [2024-11-19 10:55:26.541326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.415 [2024-11-19 10:55:26.541357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.415 qpair failed and we were unable to recover it. 00:28:19.415 [2024-11-19 10:55:26.541538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.415 [2024-11-19 10:55:26.541568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.415 qpair failed and we were unable to recover it. 00:28:19.415 [2024-11-19 10:55:26.541742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.415 [2024-11-19 10:55:26.541774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.415 qpair failed and we were unable to recover it. 00:28:19.415 [2024-11-19 10:55:26.541877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.415 [2024-11-19 10:55:26.541907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.415 qpair failed and we were unable to recover it. 00:28:19.415 [2024-11-19 10:55:26.542098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.415 [2024-11-19 10:55:26.542132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.415 qpair failed and we were unable to recover it. 00:28:19.415 [2024-11-19 10:55:26.542313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.415 [2024-11-19 10:55:26.542342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.415 qpair failed and we were unable to recover it. 00:28:19.415 [2024-11-19 10:55:26.542465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.415 [2024-11-19 10:55:26.542495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.415 qpair failed and we were unable to recover it. 00:28:19.415 [2024-11-19 10:55:26.542621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.415 [2024-11-19 10:55:26.542652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.415 qpair failed and we were unable to recover it. 00:28:19.415 [2024-11-19 10:55:26.542832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.415 [2024-11-19 10:55:26.542863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.415 qpair failed and we were unable to recover it. 00:28:19.415 [2024-11-19 10:55:26.543056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.415 [2024-11-19 10:55:26.543088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.415 qpair failed and we were unable to recover it. 
00:28:19.415 [2024-11-19 10:55:26.543270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.415 [2024-11-19 10:55:26.543300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.415 qpair failed and we were unable to recover it. 00:28:19.415 [2024-11-19 10:55:26.543536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.416 [2024-11-19 10:55:26.543567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.416 qpair failed and we were unable to recover it. 00:28:19.416 [2024-11-19 10:55:26.543685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.416 [2024-11-19 10:55:26.543717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.416 qpair failed and we were unable to recover it. 00:28:19.416 [2024-11-19 10:55:26.543896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.416 [2024-11-19 10:55:26.543928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.416 qpair failed and we were unable to recover it. 00:28:19.416 [2024-11-19 10:55:26.544116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.416 [2024-11-19 10:55:26.544147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.416 qpair failed and we were unable to recover it. 00:28:19.416 [2024-11-19 10:55:26.544383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.416 [2024-11-19 10:55:26.544414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.416 qpair failed and we were unable to recover it. 00:28:19.416 [2024-11-19 10:55:26.544584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.416 [2024-11-19 10:55:26.544615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.416 qpair failed and we were unable to recover it. 00:28:19.416 [2024-11-19 10:55:26.544723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.416 [2024-11-19 10:55:26.544756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.416 qpair failed and we were unable to recover it. 00:28:19.416 [2024-11-19 10:55:26.544969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.416 [2024-11-19 10:55:26.545002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.416 qpair failed and we were unable to recover it. 00:28:19.416 [2024-11-19 10:55:26.545181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.416 [2024-11-19 10:55:26.545213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.416 qpair failed and we were unable to recover it. 
00:28:19.416 [2024-11-19 10:55:26.545493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.416 [2024-11-19 10:55:26.545532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.416 qpair failed and we were unable to recover it. 00:28:19.416 [2024-11-19 10:55:26.545660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.416 [2024-11-19 10:55:26.545691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.416 qpair failed and we were unable to recover it. 00:28:19.416 [2024-11-19 10:55:26.545804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.416 [2024-11-19 10:55:26.545834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.416 qpair failed and we were unable to recover it. 00:28:19.416 [2024-11-19 10:55:26.545965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.416 [2024-11-19 10:55:26.545998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.416 qpair failed and we were unable to recover it. 00:28:19.416 [2024-11-19 10:55:26.546291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.416 [2024-11-19 10:55:26.546323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.416 qpair failed and we were unable to recover it. 00:28:19.416 [2024-11-19 10:55:26.546559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.416 [2024-11-19 10:55:26.546590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.416 qpair failed and we were unable to recover it. 00:28:19.416 [2024-11-19 10:55:26.546770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.416 [2024-11-19 10:55:26.546801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.416 qpair failed and we were unable to recover it. 00:28:19.416 [2024-11-19 10:55:26.546920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.416 [2024-11-19 10:55:26.546971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.416 qpair failed and we were unable to recover it. 00:28:19.416 [2024-11-19 10:55:26.547159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.416 [2024-11-19 10:55:26.547190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.416 qpair failed and we were unable to recover it. 00:28:19.416 [2024-11-19 10:55:26.547453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.416 [2024-11-19 10:55:26.547485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.416 qpair failed and we were unable to recover it. 
00:28:19.416 [2024-11-19 10:55:26.547614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.416 [2024-11-19 10:55:26.547644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.416 qpair failed and we were unable to recover it. 00:28:19.416 [2024-11-19 10:55:26.547818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.416 [2024-11-19 10:55:26.547849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.416 qpair failed and we were unable to recover it. 00:28:19.416 [2024-11-19 10:55:26.548032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.416 [2024-11-19 10:55:26.548063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.416 qpair failed and we were unable to recover it. 00:28:19.416 [2024-11-19 10:55:26.548193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.416 [2024-11-19 10:55:26.548226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.416 qpair failed and we were unable to recover it. 00:28:19.416 [2024-11-19 10:55:26.548505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.416 [2024-11-19 10:55:26.548536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.416 qpair failed and we were unable to recover it. 00:28:19.416 [2024-11-19 10:55:26.548707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.416 [2024-11-19 10:55:26.548739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.416 qpair failed and we were unable to recover it. 00:28:19.416 [2024-11-19 10:55:26.548911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.416 [2024-11-19 10:55:26.548942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.416 qpair failed and we were unable to recover it. 00:28:19.416 [2024-11-19 10:55:26.549085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.416 [2024-11-19 10:55:26.549117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.416 qpair failed and we were unable to recover it. 00:28:19.416 [2024-11-19 10:55:26.549356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.416 [2024-11-19 10:55:26.549387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.416 qpair failed and we were unable to recover it. 00:28:19.416 [2024-11-19 10:55:26.549638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.416 [2024-11-19 10:55:26.549669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.416 qpair failed and we were unable to recover it. 
00:28:19.416 [2024-11-19 10:55:26.549786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.416 [2024-11-19 10:55:26.549818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.416 qpair failed and we were unable to recover it. 00:28:19.416 [2024-11-19 10:55:26.550007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.416 [2024-11-19 10:55:26.550039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.416 qpair failed and we were unable to recover it. 00:28:19.416 [2024-11-19 10:55:26.550287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.416 [2024-11-19 10:55:26.550319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.416 qpair failed and we were unable to recover it. 00:28:19.417 [2024-11-19 10:55:26.550523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.417 [2024-11-19 10:55:26.550555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.417 qpair failed and we were unable to recover it. 00:28:19.417 [2024-11-19 10:55:26.550722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.417 [2024-11-19 10:55:26.550754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.417 qpair failed and we were unable to recover it. 00:28:19.417 [2024-11-19 10:55:26.550992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.417 [2024-11-19 10:55:26.551024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.417 qpair failed and we were unable to recover it. 00:28:19.417 [2024-11-19 10:55:26.551132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.417 [2024-11-19 10:55:26.551163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.417 qpair failed and we were unable to recover it. 00:28:19.417 [2024-11-19 10:55:26.551344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.417 [2024-11-19 10:55:26.551376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.417 qpair failed and we were unable to recover it. 00:28:19.417 [2024-11-19 10:55:26.551639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.417 [2024-11-19 10:55:26.551671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.417 qpair failed and we were unable to recover it. 00:28:19.417 [2024-11-19 10:55:26.551775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.417 [2024-11-19 10:55:26.551806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.417 qpair failed and we were unable to recover it. 
00:28:19.417 [2024-11-19 10:55:26.551997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.417 [2024-11-19 10:55:26.552030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.417 qpair failed and we were unable to recover it. 00:28:19.417 [2024-11-19 10:55:26.552215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.417 [2024-11-19 10:55:26.552246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.417 qpair failed and we were unable to recover it. 00:28:19.417 [2024-11-19 10:55:26.552501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.417 [2024-11-19 10:55:26.552531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.417 qpair failed and we were unable to recover it. 00:28:19.417 [2024-11-19 10:55:26.552706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.417 [2024-11-19 10:55:26.552738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.417 qpair failed and we were unable to recover it. 00:28:19.417 [2024-11-19 10:55:26.552990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.417 [2024-11-19 10:55:26.553021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.417 qpair failed and we were unable to recover it. 00:28:19.417 [2024-11-19 10:55:26.553147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.417 [2024-11-19 10:55:26.553178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.417 qpair failed and we were unable to recover it. 00:28:19.417 [2024-11-19 10:55:26.553425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.417 [2024-11-19 10:55:26.553456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.417 qpair failed and we were unable to recover it. 00:28:19.417 [2024-11-19 10:55:26.553695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.417 [2024-11-19 10:55:26.553728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.417 qpair failed and we were unable to recover it. 00:28:19.417 [2024-11-19 10:55:26.553968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.417 [2024-11-19 10:55:26.554000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.417 qpair failed and we were unable to recover it. 00:28:19.417 [2024-11-19 10:55:26.554115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.417 [2024-11-19 10:55:26.554145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.417 qpair failed and we were unable to recover it. 
00:28:19.417 [2024-11-19 10:55:26.554313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.417 [2024-11-19 10:55:26.554351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.417 qpair failed and we were unable to recover it. 00:28:19.417 [2024-11-19 10:55:26.554587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.417 [2024-11-19 10:55:26.554618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.417 qpair failed and we were unable to recover it. 00:28:19.417 [2024-11-19 10:55:26.554809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.417 [2024-11-19 10:55:26.554841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.417 qpair failed and we were unable to recover it. 00:28:19.417 [2024-11-19 10:55:26.554971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.417 [2024-11-19 10:55:26.555005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.417 qpair failed and we were unable to recover it. 00:28:19.417 [2024-11-19 10:55:26.555241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.417 [2024-11-19 10:55:26.555272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.417 qpair failed and we were unable to recover it. 00:28:19.417 [2024-11-19 10:55:26.555512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.417 [2024-11-19 10:55:26.555543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.417 qpair failed and we were unable to recover it. 00:28:19.417 [2024-11-19 10:55:26.555723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.417 [2024-11-19 10:55:26.555755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.417 qpair failed and we were unable to recover it. 00:28:19.417 [2024-11-19 10:55:26.555924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.417 [2024-11-19 10:55:26.555983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.417 qpair failed and we were unable to recover it. 00:28:19.417 [2024-11-19 10:55:26.556220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.417 [2024-11-19 10:55:26.556252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.417 qpair failed and we were unable to recover it. 00:28:19.417 [2024-11-19 10:55:26.556372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.417 [2024-11-19 10:55:26.556403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.417 qpair failed and we were unable to recover it. 
00:28:19.417 [2024-11-19 10:55:26.556614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.417 [2024-11-19 10:55:26.556646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.417 qpair failed and we were unable to recover it. 00:28:19.417 [2024-11-19 10:55:26.556765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.417 [2024-11-19 10:55:26.556796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.417 qpair failed and we were unable to recover it. 00:28:19.417 [2024-11-19 10:55:26.557045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.417 [2024-11-19 10:55:26.557077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.417 qpair failed and we were unable to recover it. 00:28:19.417 [2024-11-19 10:55:26.557313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.417 [2024-11-19 10:55:26.557345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.417 qpair failed and we were unable to recover it. 00:28:19.417 [2024-11-19 10:55:26.557606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.417 [2024-11-19 10:55:26.557638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.417 qpair failed and we were unable to recover it. 00:28:19.417 [2024-11-19 10:55:26.557873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.417 [2024-11-19 10:55:26.557905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.417 qpair failed and we were unable to recover it. 00:28:19.417 [2024-11-19 10:55:26.558101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.417 [2024-11-19 10:55:26.558134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.417 qpair failed and we were unable to recover it. 00:28:19.417 [2024-11-19 10:55:26.558257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.417 [2024-11-19 10:55:26.558288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.417 qpair failed and we were unable to recover it. 00:28:19.417 [2024-11-19 10:55:26.558458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.417 [2024-11-19 10:55:26.558490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.417 qpair failed and we were unable to recover it. 00:28:19.417 [2024-11-19 10:55:26.558660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.418 [2024-11-19 10:55:26.558692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.418 qpair failed and we were unable to recover it. 
00:28:19.418 [2024-11-19 10:55:26.558820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.418 [2024-11-19 10:55:26.558852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.418 qpair failed and we were unable to recover it. 00:28:19.418 [2024-11-19 10:55:26.558979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.418 [2024-11-19 10:55:26.559012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.418 qpair failed and we were unable to recover it. 00:28:19.418 [2024-11-19 10:55:26.559203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.418 [2024-11-19 10:55:26.559235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.418 qpair failed and we were unable to recover it. 00:28:19.418 [2024-11-19 10:55:26.559402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.418 [2024-11-19 10:55:26.559434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.418 qpair failed and we were unable to recover it. 00:28:19.418 [2024-11-19 10:55:26.559641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.418 [2024-11-19 10:55:26.559672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.418 qpair failed and we were unable to recover it. 00:28:19.418 [2024-11-19 10:55:26.559789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.418 [2024-11-19 10:55:26.559821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.418 qpair failed and we were unable to recover it. 00:28:19.418 [2024-11-19 10:55:26.559996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.418 [2024-11-19 10:55:26.560027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.418 qpair failed and we were unable to recover it. 00:28:19.418 [2024-11-19 10:55:26.560244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.418 [2024-11-19 10:55:26.560275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.418 qpair failed and we were unable to recover it. 00:28:19.418 [2024-11-19 10:55:26.560545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.418 [2024-11-19 10:55:26.560577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.418 qpair failed and we were unable to recover it. 00:28:19.418 [2024-11-19 10:55:26.560749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.418 [2024-11-19 10:55:26.560780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.418 qpair failed and we were unable to recover it. 
00:28:19.418 [2024-11-19 10:55:26.561015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.418 [2024-11-19 10:55:26.561047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.418 qpair failed and we were unable to recover it. 00:28:19.418 [2024-11-19 10:55:26.561231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.418 [2024-11-19 10:55:26.561263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.418 qpair failed and we were unable to recover it. 00:28:19.418 [2024-11-19 10:55:26.561512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.418 [2024-11-19 10:55:26.561543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.418 qpair failed and we were unable to recover it. 00:28:19.418 [2024-11-19 10:55:26.561787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.418 [2024-11-19 10:55:26.561819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.418 qpair failed and we were unable to recover it. 00:28:19.418 [2024-11-19 10:55:26.562059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.418 [2024-11-19 10:55:26.562093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.418 qpair failed and we were unable to recover it. 00:28:19.418 [2024-11-19 10:55:26.562279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.418 [2024-11-19 10:55:26.562311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.418 qpair failed and we were unable to recover it. 00:28:19.418 [2024-11-19 10:55:26.562552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.418 [2024-11-19 10:55:26.562584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.418 qpair failed and we were unable to recover it. 00:28:19.418 [2024-11-19 10:55:26.562755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.418 [2024-11-19 10:55:26.562786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.418 qpair failed and we were unable to recover it. 00:28:19.418 [2024-11-19 10:55:26.562916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.418 [2024-11-19 10:55:26.562958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.418 qpair failed and we were unable to recover it. 00:28:19.418 [2024-11-19 10:55:26.563152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.418 [2024-11-19 10:55:26.563182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.418 qpair failed and we were unable to recover it. 
00:28:19.418 [2024-11-19 10:55:26.563302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.418 [2024-11-19 10:55:26.563340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.418 qpair failed and we were unable to recover it. 00:28:19.418 [2024-11-19 10:55:26.563601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.418 [2024-11-19 10:55:26.563632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.418 qpair failed and we were unable to recover it. 00:28:19.418 [2024-11-19 10:55:26.563822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.418 [2024-11-19 10:55:26.563853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.418 qpair failed and we were unable to recover it. 00:28:19.418 [2024-11-19 10:55:26.563969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.418 [2024-11-19 10:55:26.564001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.418 qpair failed and we were unable to recover it. 00:28:19.418 [2024-11-19 10:55:26.564268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.418 [2024-11-19 10:55:26.564299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.418 qpair failed and we were unable to recover it. 00:28:19.418 [2024-11-19 10:55:26.564535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.418 [2024-11-19 10:55:26.564565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.418 qpair failed and we were unable to recover it. 00:28:19.418 [2024-11-19 10:55:26.564750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.418 [2024-11-19 10:55:26.564781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.418 qpair failed and we were unable to recover it. 00:28:19.418 [2024-11-19 10:55:26.564970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.418 [2024-11-19 10:55:26.565001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.418 qpair failed and we were unable to recover it. 00:28:19.418 [2024-11-19 10:55:26.565188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.418 [2024-11-19 10:55:26.565220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.418 qpair failed and we were unable to recover it. 00:28:19.418 [2024-11-19 10:55:26.565325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.418 [2024-11-19 10:55:26.565355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.418 qpair failed and we were unable to recover it. 
00:28:19.418 [2024-11-19 10:55:26.565468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.418 [2024-11-19 10:55:26.565500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.418 qpair failed and we were unable to recover it.
[... the same three-line sequence — posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=... with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats roughly 200 more times between 10:55:26.565 and 10:55:26.611, alternating between tqpair=0x7f6dd8000b90 and tqpair=0x22c8ba0 ...]
00:28:19.424 [2024-11-19 10:55:26.611412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.424 [2024-11-19 10:55:26.611444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.424 qpair failed and we were unable to recover it.
00:28:19.424 [2024-11-19 10:55:26.611627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.424 [2024-11-19 10:55:26.611658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.424 qpair failed and we were unable to recover it. 00:28:19.424 [2024-11-19 10:55:26.611891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.424 [2024-11-19 10:55:26.611924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.424 qpair failed and we were unable to recover it. 00:28:19.424 [2024-11-19 10:55:26.612115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.424 [2024-11-19 10:55:26.612149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.424 qpair failed and we were unable to recover it. 00:28:19.424 [2024-11-19 10:55:26.612407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.424 [2024-11-19 10:55:26.612439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.424 qpair failed and we were unable to recover it. 00:28:19.424 [2024-11-19 10:55:26.612643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.424 [2024-11-19 10:55:26.612674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.424 qpair failed and we were unable to recover it. 00:28:19.424 [2024-11-19 10:55:26.612858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.424 [2024-11-19 10:55:26.612890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.424 qpair failed and we were unable to recover it. 00:28:19.424 [2024-11-19 10:55:26.613075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.424 [2024-11-19 10:55:26.613109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.424 qpair failed and we were unable to recover it. 00:28:19.424 [2024-11-19 10:55:26.613351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.424 [2024-11-19 10:55:26.613383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.424 qpair failed and we were unable to recover it. 00:28:19.424 [2024-11-19 10:55:26.613496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.424 [2024-11-19 10:55:26.613527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.424 qpair failed and we were unable to recover it. 00:28:19.424 [2024-11-19 10:55:26.613661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.424 [2024-11-19 10:55:26.613693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.424 qpair failed and we were unable to recover it. 
00:28:19.424 [2024-11-19 10:55:26.613874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.424 [2024-11-19 10:55:26.613907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.424 qpair failed and we were unable to recover it. 00:28:19.424 [2024-11-19 10:55:26.614092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.424 [2024-11-19 10:55:26.614126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.424 qpair failed and we were unable to recover it. 00:28:19.424 [2024-11-19 10:55:26.614298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.424 [2024-11-19 10:55:26.614329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.424 qpair failed and we were unable to recover it. 00:28:19.424 [2024-11-19 10:55:26.614445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.424 [2024-11-19 10:55:26.614478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.424 qpair failed and we were unable to recover it. 00:28:19.424 [2024-11-19 10:55:26.614615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.424 [2024-11-19 10:55:26.614646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.424 qpair failed and we were unable to recover it. 00:28:19.424 [2024-11-19 10:55:26.614906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.424 [2024-11-19 10:55:26.614938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.424 qpair failed and we were unable to recover it. 00:28:19.424 [2024-11-19 10:55:26.615079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.424 [2024-11-19 10:55:26.615111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.424 qpair failed and we were unable to recover it. 00:28:19.424 [2024-11-19 10:55:26.615218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.424 [2024-11-19 10:55:26.615250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.424 qpair failed and we were unable to recover it. 00:28:19.424 [2024-11-19 10:55:26.615433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.424 [2024-11-19 10:55:26.615465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.424 qpair failed and we were unable to recover it. 00:28:19.424 [2024-11-19 10:55:26.615640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.424 [2024-11-19 10:55:26.615673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.424 qpair failed and we were unable to recover it. 
00:28:19.424 [2024-11-19 10:55:26.615790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.424 [2024-11-19 10:55:26.615822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.424 qpair failed and we were unable to recover it. 00:28:19.424 [2024-11-19 10:55:26.615998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.425 [2024-11-19 10:55:26.616032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.425 qpair failed and we were unable to recover it. 00:28:19.425 [2024-11-19 10:55:26.616214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.425 [2024-11-19 10:55:26.616246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.425 qpair failed and we were unable to recover it. 00:28:19.425 [2024-11-19 10:55:26.616374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.425 [2024-11-19 10:55:26.616405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.425 qpair failed and we were unable to recover it. 00:28:19.425 [2024-11-19 10:55:26.616597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.425 [2024-11-19 10:55:26.616630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.425 qpair failed and we were unable to recover it. 00:28:19.425 [2024-11-19 10:55:26.616806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.425 [2024-11-19 10:55:26.616838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.425 qpair failed and we were unable to recover it. 00:28:19.425 [2024-11-19 10:55:26.617095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.425 [2024-11-19 10:55:26.617128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.425 qpair failed and we were unable to recover it. 00:28:19.425 [2024-11-19 10:55:26.617230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.425 [2024-11-19 10:55:26.617261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.425 qpair failed and we were unable to recover it. 00:28:19.425 [2024-11-19 10:55:26.617522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.425 [2024-11-19 10:55:26.617555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.425 qpair failed and we were unable to recover it. 00:28:19.425 [2024-11-19 10:55:26.617691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.425 [2024-11-19 10:55:26.617722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.425 qpair failed and we were unable to recover it. 
00:28:19.425 [2024-11-19 10:55:26.617828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.425 [2024-11-19 10:55:26.617860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.425 qpair failed and we were unable to recover it. 00:28:19.425 [2024-11-19 10:55:26.618029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.425 [2024-11-19 10:55:26.618061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.425 qpair failed and we were unable to recover it. 00:28:19.425 [2024-11-19 10:55:26.618233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.425 [2024-11-19 10:55:26.618265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.425 qpair failed and we were unable to recover it. 00:28:19.425 [2024-11-19 10:55:26.618434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.425 [2024-11-19 10:55:26.618466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.425 qpair failed and we were unable to recover it. 00:28:19.425 [2024-11-19 10:55:26.618643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.425 [2024-11-19 10:55:26.618675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.425 qpair failed and we were unable to recover it. 00:28:19.425 [2024-11-19 10:55:26.618927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.425 [2024-11-19 10:55:26.618967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.425 qpair failed and we were unable to recover it. 00:28:19.425 [2024-11-19 10:55:26.619146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.425 [2024-11-19 10:55:26.619178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.425 qpair failed and we were unable to recover it. 00:28:19.425 [2024-11-19 10:55:26.619438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.425 [2024-11-19 10:55:26.619509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.425 qpair failed and we were unable to recover it. 00:28:19.425 [2024-11-19 10:55:26.619708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.425 [2024-11-19 10:55:26.619743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.425 qpair failed and we were unable to recover it. 00:28:19.425 [2024-11-19 10:55:26.619935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.425 [2024-11-19 10:55:26.619980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.425 qpair failed and we were unable to recover it. 
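On Linux, errno = 111 is ECONNREFUSED: the target at 10.0.0.2:4420 (4420 is the conventional NVMe-oF TCP port) is actively refusing the TCP connection, so posix_sock_create fails before any NVMe/TCP traffic is exchanged. The standalone probe below is a minimal sketch, not SPDK code; against a reachable host with nothing listening on that port, it reproduces the same errno:

/* probe_econnrefused.c -- illustrative only, not part of SPDK.
 * Build: cc -o probe probe_econnrefused.c */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),        /* NVMe-oF TCP port from the log */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With the host up but nothing listening on port 4420, the
         * kernel answers the SYN with RST and this prints:
         *   connect() failed, errno = 111 (Connection refused) */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
    }
    close(fd);
    return 0;
}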
00:28:19.425 [... the loop continues unchanged for tqpair=0x7f6de0000b90: connect() failed, errno = 111 at posix.c:1054:posix_sock_create, followed by the nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock error for addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it.", repeated with timestamps advancing from 10:55:26.620105 through 10:55:26.647843 (about 130 attempts) ...]
00:28:19.429 [2024-11-19 10:55:26.648058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.429 [2024-11-19 10:55:26.648090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.429 qpair failed and we were unable to recover it. 00:28:19.429 [2024-11-19 10:55:26.648288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.429 [2024-11-19 10:55:26.648320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.429 qpair failed and we were unable to recover it. 00:28:19.429 [2024-11-19 10:55:26.648608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.429 [2024-11-19 10:55:26.648640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.429 qpair failed and we were unable to recover it. 00:28:19.429 [2024-11-19 10:55:26.648762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.429 [2024-11-19 10:55:26.648792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.429 qpair failed and we were unable to recover it. 00:28:19.429 [2024-11-19 10:55:26.649047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.429 [2024-11-19 10:55:26.649080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.429 qpair failed and we were unable to recover it. 00:28:19.429 [2024-11-19 10:55:26.649183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.429 [2024-11-19 10:55:26.649215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.429 qpair failed and we were unable to recover it. 00:28:19.429 [2024-11-19 10:55:26.649402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.429 [2024-11-19 10:55:26.649434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.429 qpair failed and we were unable to recover it. 00:28:19.429 [2024-11-19 10:55:26.649610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.429 [2024-11-19 10:55:26.649642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.429 qpair failed and we were unable to recover it. 00:28:19.429 [2024-11-19 10:55:26.649809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.429 [2024-11-19 10:55:26.649841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.429 qpair failed and we were unable to recover it. 00:28:19.429 [2024-11-19 10:55:26.650098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.429 [2024-11-19 10:55:26.650130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.429 qpair failed and we were unable to recover it. 
00:28:19.429 [2024-11-19 10:55:26.650416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.429 [2024-11-19 10:55:26.650448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.429 qpair failed and we were unable to recover it. 00:28:19.429 [2024-11-19 10:55:26.650715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.429 [2024-11-19 10:55:26.650748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.429 qpair failed and we were unable to recover it. 00:28:19.429 [2024-11-19 10:55:26.650966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.429 [2024-11-19 10:55:26.650998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.429 qpair failed and we were unable to recover it. 00:28:19.429 [2024-11-19 10:55:26.651132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.429 [2024-11-19 10:55:26.651163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.429 qpair failed and we were unable to recover it. 00:28:19.429 [2024-11-19 10:55:26.651303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.429 [2024-11-19 10:55:26.651335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.429 qpair failed and we were unable to recover it. 00:28:19.429 [2024-11-19 10:55:26.651525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.429 [2024-11-19 10:55:26.651556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.429 qpair failed and we were unable to recover it. 00:28:19.429 [2024-11-19 10:55:26.651744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.429 [2024-11-19 10:55:26.651775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.429 qpair failed and we were unable to recover it. 00:28:19.429 [2024-11-19 10:55:26.652056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.429 [2024-11-19 10:55:26.652088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.429 qpair failed and we were unable to recover it. 00:28:19.429 [2024-11-19 10:55:26.652199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.429 [2024-11-19 10:55:26.652230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.429 qpair failed and we were unable to recover it. 00:28:19.429 [2024-11-19 10:55:26.652421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.429 [2024-11-19 10:55:26.652452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.429 qpair failed and we were unable to recover it. 
00:28:19.429 [2024-11-19 10:55:26.652556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.429 [2024-11-19 10:55:26.652586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.429 qpair failed and we were unable to recover it. 00:28:19.429 [2024-11-19 10:55:26.652824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.429 [2024-11-19 10:55:26.652856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.429 qpair failed and we were unable to recover it. 00:28:19.429 [2024-11-19 10:55:26.653032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.429 [2024-11-19 10:55:26.653065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.429 qpair failed and we were unable to recover it. 00:28:19.429 [2024-11-19 10:55:26.653199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.429 [2024-11-19 10:55:26.653231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.429 qpair failed and we were unable to recover it. 00:28:19.429 [2024-11-19 10:55:26.653350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.429 [2024-11-19 10:55:26.653381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.429 qpair failed and we were unable to recover it. 00:28:19.429 [2024-11-19 10:55:26.653648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.429 [2024-11-19 10:55:26.653680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.429 qpair failed and we were unable to recover it. 00:28:19.429 [2024-11-19 10:55:26.653915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.429 [2024-11-19 10:55:26.653957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.429 qpair failed and we were unable to recover it. 00:28:19.429 [2024-11-19 10:55:26.654222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.429 [2024-11-19 10:55:26.654258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.429 qpair failed and we were unable to recover it. 00:28:19.429 [2024-11-19 10:55:26.654517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.429 [2024-11-19 10:55:26.654549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.429 qpair failed and we were unable to recover it. 00:28:19.429 [2024-11-19 10:55:26.654681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.429 [2024-11-19 10:55:26.654712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.429 qpair failed and we were unable to recover it. 
00:28:19.429 [2024-11-19 10:55:26.654822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.429 [2024-11-19 10:55:26.654853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.429 qpair failed and we were unable to recover it. 00:28:19.429 [2024-11-19 10:55:26.655026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.429 [2024-11-19 10:55:26.655058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.429 qpair failed and we were unable to recover it. 00:28:19.429 [2024-11-19 10:55:26.655347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.429 [2024-11-19 10:55:26.655378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.429 qpair failed and we were unable to recover it. 00:28:19.429 [2024-11-19 10:55:26.655513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.429 [2024-11-19 10:55:26.655545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.429 qpair failed and we were unable to recover it. 00:28:19.429 [2024-11-19 10:55:26.655736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.429 [2024-11-19 10:55:26.655767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.429 qpair failed and we were unable to recover it. 00:28:19.429 [2024-11-19 10:55:26.655940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.429 [2024-11-19 10:55:26.655981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.429 qpair failed and we were unable to recover it. 00:28:19.429 [2024-11-19 10:55:26.656178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.430 [2024-11-19 10:55:26.656209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.430 qpair failed and we were unable to recover it. 00:28:19.430 [2024-11-19 10:55:26.656451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.430 [2024-11-19 10:55:26.656482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.430 qpair failed and we were unable to recover it. 00:28:19.430 [2024-11-19 10:55:26.656621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.430 [2024-11-19 10:55:26.656653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.430 qpair failed and we were unable to recover it. 00:28:19.430 [2024-11-19 10:55:26.656841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.430 [2024-11-19 10:55:26.656871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.430 qpair failed and we were unable to recover it. 
00:28:19.430 [2024-11-19 10:55:26.657052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.430 [2024-11-19 10:55:26.657085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.430 qpair failed and we were unable to recover it. 00:28:19.430 [2024-11-19 10:55:26.657269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.430 [2024-11-19 10:55:26.657301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.430 qpair failed and we were unable to recover it. 00:28:19.430 [2024-11-19 10:55:26.657561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.430 [2024-11-19 10:55:26.657593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.430 qpair failed and we were unable to recover it. 00:28:19.430 [2024-11-19 10:55:26.657848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.430 [2024-11-19 10:55:26.657878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.430 qpair failed and we were unable to recover it. 00:28:19.430 [2024-11-19 10:55:26.658050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.430 [2024-11-19 10:55:26.658083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.430 qpair failed and we were unable to recover it. 00:28:19.430 [2024-11-19 10:55:26.658358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.430 [2024-11-19 10:55:26.658389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.430 qpair failed and we were unable to recover it. 00:28:19.430 [2024-11-19 10:55:26.658571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.430 [2024-11-19 10:55:26.658602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.430 qpair failed and we were unable to recover it. 00:28:19.430 [2024-11-19 10:55:26.658722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.430 [2024-11-19 10:55:26.658754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.430 qpair failed and we were unable to recover it. 00:28:19.430 [2024-11-19 10:55:26.658937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.430 [2024-11-19 10:55:26.658979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.430 qpair failed and we were unable to recover it. 00:28:19.430 [2024-11-19 10:55:26.659157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.430 [2024-11-19 10:55:26.659188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.430 qpair failed and we were unable to recover it. 
00:28:19.430 [2024-11-19 10:55:26.659447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.430 [2024-11-19 10:55:26.659478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.430 qpair failed and we were unable to recover it. 00:28:19.430 [2024-11-19 10:55:26.659651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.430 [2024-11-19 10:55:26.659681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.430 qpair failed and we were unable to recover it. 00:28:19.430 [2024-11-19 10:55:26.659863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.430 [2024-11-19 10:55:26.659894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.430 qpair failed and we were unable to recover it. 00:28:19.430 [2024-11-19 10:55:26.660042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.430 [2024-11-19 10:55:26.660074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.430 qpair failed and we were unable to recover it. 00:28:19.430 [2024-11-19 10:55:26.660192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.430 [2024-11-19 10:55:26.660222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.430 qpair failed and we were unable to recover it. 00:28:19.430 [2024-11-19 10:55:26.660404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.430 [2024-11-19 10:55:26.660436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.430 qpair failed and we were unable to recover it. 00:28:19.430 [2024-11-19 10:55:26.660646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.430 [2024-11-19 10:55:26.660677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.430 qpair failed and we were unable to recover it. 00:28:19.430 [2024-11-19 10:55:26.660914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.430 [2024-11-19 10:55:26.660945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.430 qpair failed and we were unable to recover it. 00:28:19.430 [2024-11-19 10:55:26.661083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.430 [2024-11-19 10:55:26.661115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.430 qpair failed and we were unable to recover it. 00:28:19.430 [2024-11-19 10:55:26.661240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.430 [2024-11-19 10:55:26.661271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.430 qpair failed and we were unable to recover it. 
00:28:19.430 [2024-11-19 10:55:26.661451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.430 [2024-11-19 10:55:26.661482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.430 qpair failed and we were unable to recover it. 00:28:19.430 [2024-11-19 10:55:26.661721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.430 [2024-11-19 10:55:26.661752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.430 qpair failed and we were unable to recover it. 00:28:19.430 [2024-11-19 10:55:26.661870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.430 [2024-11-19 10:55:26.661901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.430 qpair failed and we were unable to recover it. 00:28:19.430 [2024-11-19 10:55:26.662156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.430 [2024-11-19 10:55:26.662189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.430 qpair failed and we were unable to recover it. 00:28:19.430 [2024-11-19 10:55:26.662427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.430 [2024-11-19 10:55:26.662459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.430 qpair failed and we were unable to recover it. 00:28:19.430 [2024-11-19 10:55:26.662728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.430 [2024-11-19 10:55:26.662759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.430 qpair failed and we were unable to recover it. 00:28:19.430 [2024-11-19 10:55:26.662998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.430 [2024-11-19 10:55:26.663031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.430 qpair failed and we were unable to recover it. 00:28:19.430 [2024-11-19 10:55:26.663273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.430 [2024-11-19 10:55:26.663311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.430 qpair failed and we were unable to recover it. 00:28:19.430 [2024-11-19 10:55:26.663495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.430 [2024-11-19 10:55:26.663527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.430 qpair failed and we were unable to recover it. 00:28:19.430 [2024-11-19 10:55:26.663717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.430 [2024-11-19 10:55:26.663747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.430 qpair failed and we were unable to recover it. 
00:28:19.431 [2024-11-19 10:55:26.663934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.431 [2024-11-19 10:55:26.663975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.431 qpair failed and we were unable to recover it. 00:28:19.431 [2024-11-19 10:55:26.664235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.431 [2024-11-19 10:55:26.664266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.431 qpair failed and we were unable to recover it. 00:28:19.431 [2024-11-19 10:55:26.664511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.431 [2024-11-19 10:55:26.664542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.431 qpair failed and we were unable to recover it. 00:28:19.431 [2024-11-19 10:55:26.664724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.431 [2024-11-19 10:55:26.664755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.431 qpair failed and we were unable to recover it. 00:28:19.431 [2024-11-19 10:55:26.664994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.431 [2024-11-19 10:55:26.665027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.431 qpair failed and we were unable to recover it. 00:28:19.431 [2024-11-19 10:55:26.665216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.431 [2024-11-19 10:55:26.665247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.431 qpair failed and we were unable to recover it. 00:28:19.431 [2024-11-19 10:55:26.665374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.431 [2024-11-19 10:55:26.665404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.431 qpair failed and we were unable to recover it. 00:28:19.431 [2024-11-19 10:55:26.665587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.431 [2024-11-19 10:55:26.665618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.431 qpair failed and we were unable to recover it. 00:28:19.431 [2024-11-19 10:55:26.665802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.431 [2024-11-19 10:55:26.665833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.431 qpair failed and we were unable to recover it. 00:28:19.431 [2024-11-19 10:55:26.666008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.431 [2024-11-19 10:55:26.666041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.431 qpair failed and we were unable to recover it. 
00:28:19.431 [2024-11-19 10:55:26.666232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.431 [2024-11-19 10:55:26.666264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.431 qpair failed and we were unable to recover it. 00:28:19.431 [2024-11-19 10:55:26.666531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.431 [2024-11-19 10:55:26.666562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.431 qpair failed and we were unable to recover it. 00:28:19.431 [2024-11-19 10:55:26.666747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.431 [2024-11-19 10:55:26.666779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.431 qpair failed and we were unable to recover it. 00:28:19.431 [2024-11-19 10:55:26.666990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.431 [2024-11-19 10:55:26.667023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.431 qpair failed and we were unable to recover it. 00:28:19.431 [2024-11-19 10:55:26.667201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.431 [2024-11-19 10:55:26.667233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.431 qpair failed and we were unable to recover it. 00:28:19.431 [2024-11-19 10:55:26.667402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.431 [2024-11-19 10:55:26.667434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.431 qpair failed and we were unable to recover it. 00:28:19.431 [2024-11-19 10:55:26.667612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.431 [2024-11-19 10:55:26.667644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.431 qpair failed and we were unable to recover it. 00:28:19.431 [2024-11-19 10:55:26.667814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.431 [2024-11-19 10:55:26.667845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.431 qpair failed and we were unable to recover it. 00:28:19.431 [2024-11-19 10:55:26.668019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.431 [2024-11-19 10:55:26.668052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.431 qpair failed and we were unable to recover it. 00:28:19.431 [2024-11-19 10:55:26.668310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.431 [2024-11-19 10:55:26.668343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.431 qpair failed and we were unable to recover it. 
00:28:19.431 [2024-11-19 10:55:26.668472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.431 [2024-11-19 10:55:26.668503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.431 qpair failed and we were unable to recover it. 00:28:19.431 [2024-11-19 10:55:26.668627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.431 [2024-11-19 10:55:26.668659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.431 qpair failed and we were unable to recover it. 00:28:19.431 [2024-11-19 10:55:26.668852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.431 [2024-11-19 10:55:26.668883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.431 qpair failed and we were unable to recover it. 00:28:19.431 [2024-11-19 10:55:26.669010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.431 [2024-11-19 10:55:26.669043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.431 qpair failed and we were unable to recover it. 00:28:19.431 [2024-11-19 10:55:26.669287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.431 [2024-11-19 10:55:26.669318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.431 qpair failed and we were unable to recover it. 00:28:19.431 [2024-11-19 10:55:26.669537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.431 [2024-11-19 10:55:26.669568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.431 qpair failed and we were unable to recover it. 00:28:19.431 [2024-11-19 10:55:26.669755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.431 [2024-11-19 10:55:26.669786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.431 qpair failed and we were unable to recover it. 00:28:19.431 [2024-11-19 10:55:26.669898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.431 [2024-11-19 10:55:26.669929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.431 qpair failed and we were unable to recover it. 00:28:19.431 [2024-11-19 10:55:26.670074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.431 [2024-11-19 10:55:26.670107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.431 qpair failed and we were unable to recover it. 00:28:19.431 [2024-11-19 10:55:26.670293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.431 [2024-11-19 10:55:26.670325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.431 qpair failed and we were unable to recover it. 
00:28:19.431 [2024-11-19 10:55:26.670586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.431 [2024-11-19 10:55:26.670617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.431 qpair failed and we were unable to recover it. 00:28:19.431 [2024-11-19 10:55:26.670791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.431 [2024-11-19 10:55:26.670822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.431 qpair failed and we were unable to recover it. 00:28:19.431 [2024-11-19 10:55:26.670942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.431 [2024-11-19 10:55:26.670985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.431 qpair failed and we were unable to recover it. 00:28:19.431 [2024-11-19 10:55:26.671224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.431 [2024-11-19 10:55:26.671255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.431 qpair failed and we were unable to recover it. 00:28:19.431 [2024-11-19 10:55:26.671426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.431 [2024-11-19 10:55:26.671457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.431 qpair failed and we were unable to recover it. 00:28:19.431 [2024-11-19 10:55:26.671651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.431 [2024-11-19 10:55:26.671682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.431 qpair failed and we were unable to recover it. 00:28:19.431 [2024-11-19 10:55:26.671801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.431 [2024-11-19 10:55:26.671831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.431 qpair failed and we were unable to recover it. 00:28:19.431 [2024-11-19 10:55:26.671959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.431 [2024-11-19 10:55:26.671998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.431 qpair failed and we were unable to recover it. 00:28:19.432 [2024-11-19 10:55:26.672173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.432 [2024-11-19 10:55:26.672205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.432 qpair failed and we were unable to recover it. 00:28:19.432 [2024-11-19 10:55:26.672463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.432 [2024-11-19 10:55:26.672493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.432 qpair failed and we were unable to recover it. 
00:28:19.432 [2024-11-19 10:55:26.672599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.432 [2024-11-19 10:55:26.672630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.432 qpair failed and we were unable to recover it. 00:28:19.432 [2024-11-19 10:55:26.672870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.432 [2024-11-19 10:55:26.672901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.432 qpair failed and we were unable to recover it. 00:28:19.432 [2024-11-19 10:55:26.673093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.432 [2024-11-19 10:55:26.673126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.432 qpair failed and we were unable to recover it. 00:28:19.432 [2024-11-19 10:55:26.673364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.432 [2024-11-19 10:55:26.673394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.432 qpair failed and we were unable to recover it. 00:28:19.432 [2024-11-19 10:55:26.673577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.432 [2024-11-19 10:55:26.673608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.432 qpair failed and we were unable to recover it. 00:28:19.432 [2024-11-19 10:55:26.673789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.432 [2024-11-19 10:55:26.673820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.432 qpair failed and we were unable to recover it. 00:28:19.432 [2024-11-19 10:55:26.674060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.432 [2024-11-19 10:55:26.674093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.432 qpair failed and we were unable to recover it. 00:28:19.432 [2024-11-19 10:55:26.674220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.432 [2024-11-19 10:55:26.674252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.432 qpair failed and we were unable to recover it. 00:28:19.432 [2024-11-19 10:55:26.674457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.432 [2024-11-19 10:55:26.674489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.432 qpair failed and we were unable to recover it. 00:28:19.432 [2024-11-19 10:55:26.674730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.432 [2024-11-19 10:55:26.674761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.432 qpair failed and we were unable to recover it. 
00:28:19.432 [2024-11-19 10:55:26.674932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.432 [2024-11-19 10:55:26.674972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.432 qpair failed and we were unable to recover it. 00:28:19.432 [2024-11-19 10:55:26.675224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.432 [2024-11-19 10:55:26.675256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.432 qpair failed and we were unable to recover it. 00:28:19.432 [2024-11-19 10:55:26.675384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.432 [2024-11-19 10:55:26.675415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.432 qpair failed and we were unable to recover it. 00:28:19.432 [2024-11-19 10:55:26.675612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.432 [2024-11-19 10:55:26.675642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.432 qpair failed and we were unable to recover it. 00:28:19.432 [2024-11-19 10:55:26.675827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.432 [2024-11-19 10:55:26.675858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.432 qpair failed and we were unable to recover it. 00:28:19.432 [2024-11-19 10:55:26.676111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.432 [2024-11-19 10:55:26.676143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.432 qpair failed and we were unable to recover it. 00:28:19.432 [2024-11-19 10:55:26.676331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.432 [2024-11-19 10:55:26.676362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.432 qpair failed and we were unable to recover it. 00:28:19.432 [2024-11-19 10:55:26.676535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.432 [2024-11-19 10:55:26.676566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.432 qpair failed and we were unable to recover it. 00:28:19.432 [2024-11-19 10:55:26.676832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.432 [2024-11-19 10:55:26.676864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.432 qpair failed and we were unable to recover it. 00:28:19.432 [2024-11-19 10:55:26.677071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.432 [2024-11-19 10:55:26.677105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.432 qpair failed and we were unable to recover it. 
00:28:19.432 [2024-11-19 10:55:26.677227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.432 [2024-11-19 10:55:26.677258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.432 qpair failed and we were unable to recover it. 00:28:19.432 [2024-11-19 10:55:26.677496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.432 [2024-11-19 10:55:26.677528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.432 qpair failed and we were unable to recover it. 00:28:19.432 [2024-11-19 10:55:26.677711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.432 [2024-11-19 10:55:26.677742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.432 qpair failed and we were unable to recover it. 00:28:19.432 [2024-11-19 10:55:26.677913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.432 [2024-11-19 10:55:26.677945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.432 qpair failed and we were unable to recover it. 00:28:19.432 [2024-11-19 10:55:26.678153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.432 [2024-11-19 10:55:26.678185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.432 qpair failed and we were unable to recover it. 00:28:19.432 [2024-11-19 10:55:26.678382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.432 [2024-11-19 10:55:26.678413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.432 qpair failed and we were unable to recover it. 00:28:19.432 [2024-11-19 10:55:26.678679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.432 [2024-11-19 10:55:26.678711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.432 qpair failed and we were unable to recover it. 00:28:19.432 [2024-11-19 10:55:26.678963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.432 [2024-11-19 10:55:26.678996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.432 qpair failed and we were unable to recover it. 00:28:19.432 [2024-11-19 10:55:26.679236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.432 [2024-11-19 10:55:26.679268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.432 qpair failed and we were unable to recover it. 00:28:19.432 [2024-11-19 10:55:26.679439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.432 [2024-11-19 10:55:26.679471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.432 qpair failed and we were unable to recover it. 
00:28:19.432 [2024-11-19 10:55:26.679734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.432 [2024-11-19 10:55:26.679766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.432 qpair failed and we were unable to recover it. 00:28:19.432 [2024-11-19 10:55:26.679938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.432 [2024-11-19 10:55:26.679980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.432 qpair failed and we were unable to recover it. 00:28:19.432 [2024-11-19 10:55:26.680174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.432 [2024-11-19 10:55:26.680207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.432 qpair failed and we were unable to recover it. 00:28:19.432 [2024-11-19 10:55:26.680379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.432 [2024-11-19 10:55:26.680410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.432 qpair failed and we were unable to recover it. 00:28:19.432 [2024-11-19 10:55:26.680546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.432 [2024-11-19 10:55:26.680577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.432 qpair failed and we were unable to recover it. 00:28:19.432 [2024-11-19 10:55:26.680821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.432 [2024-11-19 10:55:26.680853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.433 qpair failed and we were unable to recover it. 00:28:19.433 [2024-11-19 10:55:26.680978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.433 [2024-11-19 10:55:26.681012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.433 qpair failed and we were unable to recover it. 00:28:19.433 [2024-11-19 10:55:26.681311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.433 [2024-11-19 10:55:26.681349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.433 qpair failed and we were unable to recover it. 00:28:19.433 [2024-11-19 10:55:26.681533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.433 [2024-11-19 10:55:26.681564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.433 qpair failed and we were unable to recover it. 00:28:19.433 [2024-11-19 10:55:26.681697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.433 [2024-11-19 10:55:26.681727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.433 qpair failed and we were unable to recover it. 
00:28:19.433 [2024-11-19 10:55:26.681921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.433 [2024-11-19 10:55:26.681963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.433 qpair failed and we were unable to recover it.
00:28:19.433 [2024-11-19 10:55:26.682104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.433 [2024-11-19 10:55:26.682135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.433 qpair failed and we were unable to recover it.
00:28:19.433 [2024-11-19 10:55:26.682396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.433 [2024-11-19 10:55:26.682427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.433 qpair failed and we were unable to recover it.
00:28:19.433 [2024-11-19 10:55:26.682612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.433 [2024-11-19 10:55:26.682643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.433 qpair failed and we were unable to recover it.
00:28:19.433 [2024-11-19 10:55:26.682844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.433 [2024-11-19 10:55:26.682876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.433 qpair failed and we were unable to recover it.
00:28:19.433 [2024-11-19 10:55:26.683054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.433 [2024-11-19 10:55:26.683086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.433 qpair failed and we were unable to recover it.
00:28:19.433 [2024-11-19 10:55:26.683198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.433 [2024-11-19 10:55:26.683230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.433 qpair failed and we were unable to recover it.
00:28:19.433 [2024-11-19 10:55:26.683518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.433 [2024-11-19 10:55:26.683550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.433 qpair failed and we were unable to recover it.
00:28:19.433 [2024-11-19 10:55:26.683802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.433 [2024-11-19 10:55:26.683833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.433 qpair failed and we were unable to recover it.
00:28:19.433 [2024-11-19 10:55:26.684029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.433 [2024-11-19 10:55:26.684061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.433 qpair failed and we were unable to recover it.
00:28:19.433 [2024-11-19 10:55:26.684302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.433 [2024-11-19 10:55:26.684334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.433 qpair failed and we were unable to recover it.
00:28:19.433 [2024-11-19 10:55:26.684460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.433 [2024-11-19 10:55:26.684493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.433 qpair failed and we were unable to recover it.
00:28:19.433 [2024-11-19 10:55:26.684606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.433 [2024-11-19 10:55:26.684637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.433 qpair failed and we were unable to recover it.
00:28:19.433 [2024-11-19 10:55:26.684839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.433 [2024-11-19 10:55:26.684870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.433 qpair failed and we were unable to recover it.
00:28:19.433 [2024-11-19 10:55:26.685052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.433 [2024-11-19 10:55:26.685085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.433 qpair failed and we were unable to recover it.
00:28:19.433 [2024-11-19 10:55:26.685264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.433 [2024-11-19 10:55:26.685294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.433 qpair failed and we were unable to recover it.
00:28:19.433 [2024-11-19 10:55:26.685470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.433 [2024-11-19 10:55:26.685501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.433 qpair failed and we were unable to recover it.
00:28:19.433 [2024-11-19 10:55:26.685621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.433 [2024-11-19 10:55:26.685651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.433 qpair failed and we were unable to recover it.
00:28:19.433 [2024-11-19 10:55:26.685787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.433 [2024-11-19 10:55:26.685818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.433 qpair failed and we were unable to recover it.
00:28:19.433 [2024-11-19 10:55:26.686020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.433 [2024-11-19 10:55:26.686052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.433 qpair failed and we were unable to recover it.
00:28:19.433 [2024-11-19 10:55:26.686183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.433 [2024-11-19 10:55:26.686216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.433 qpair failed and we were unable to recover it.
00:28:19.433 [2024-11-19 10:55:26.686346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.433 [2024-11-19 10:55:26.686377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.433 qpair failed and we were unable to recover it.
00:28:19.433 [2024-11-19 10:55:26.686493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.433 [2024-11-19 10:55:26.686524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.433 qpair failed and we were unable to recover it.
00:28:19.433 [2024-11-19 10:55:26.686717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.433 [2024-11-19 10:55:26.686748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.433 qpair failed and we were unable to recover it.
00:28:19.433 [2024-11-19 10:55:26.686972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.433 [2024-11-19 10:55:26.687006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.433 qpair failed and we were unable to recover it.
00:28:19.433 [2024-11-19 10:55:26.687213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.433 [2024-11-19 10:55:26.687246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.433 qpair failed and we were unable to recover it.
00:28:19.433 [2024-11-19 10:55:26.687369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.433 [2024-11-19 10:55:26.687401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.433 qpair failed and we were unable to recover it.
00:28:19.433 [2024-11-19 10:55:26.687620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.433 [2024-11-19 10:55:26.687650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.433 qpair failed and we were unable to recover it.
00:28:19.433 [2024-11-19 10:55:26.687847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.433 [2024-11-19 10:55:26.687878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.433 qpair failed and we were unable to recover it.
00:28:19.433 [2024-11-19 10:55:26.687999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.433 [2024-11-19 10:55:26.688032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.433 qpair failed and we were unable to recover it.
00:28:19.433 [2024-11-19 10:55:26.688343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.433 [2024-11-19 10:55:26.688375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.433 qpair failed and we were unable to recover it.
00:28:19.433 [2024-11-19 10:55:26.688614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.433 [2024-11-19 10:55:26.688645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.433 qpair failed and we were unable to recover it.
00:28:19.433 [2024-11-19 10:55:26.688757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.433 [2024-11-19 10:55:26.688789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.433 qpair failed and we were unable to recover it.
00:28:19.433 [2024-11-19 10:55:26.688924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.434 [2024-11-19 10:55:26.688973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.434 qpair failed and we were unable to recover it.
00:28:19.434 [2024-11-19 10:55:26.689164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.434 [2024-11-19 10:55:26.689195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.434 qpair failed and we were unable to recover it.
00:28:19.434 [2024-11-19 10:55:26.689434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.434 [2024-11-19 10:55:26.689466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.434 qpair failed and we were unable to recover it.
00:28:19.434 [2024-11-19 10:55:26.689590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.434 [2024-11-19 10:55:26.689621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.434 qpair failed and we were unable to recover it.
00:28:19.434 [2024-11-19 10:55:26.689800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.434 [2024-11-19 10:55:26.689836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.434 qpair failed and we were unable to recover it.
00:28:19.434 [2024-11-19 10:55:26.690116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.434 [2024-11-19 10:55:26.690149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.434 qpair failed and we were unable to recover it.
00:28:19.434 [2024-11-19 10:55:26.690337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.434 [2024-11-19 10:55:26.690368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.434 qpair failed and we were unable to recover it.
00:28:19.434 [2024-11-19 10:55:26.690554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.434 [2024-11-19 10:55:26.690585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.434 qpair failed and we were unable to recover it.
00:28:19.434 [2024-11-19 10:55:26.690828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.434 [2024-11-19 10:55:26.690860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.434 qpair failed and we were unable to recover it.
00:28:19.434 [2024-11-19 10:55:26.691060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.434 [2024-11-19 10:55:26.691093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.434 qpair failed and we were unable to recover it.
00:28:19.434 [2024-11-19 10:55:26.691278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.434 [2024-11-19 10:55:26.691310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.434 qpair failed and we were unable to recover it.
00:28:19.434 [2024-11-19 10:55:26.691426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.434 [2024-11-19 10:55:26.691458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.434 qpair failed and we were unable to recover it.
00:28:19.434 [2024-11-19 10:55:26.691700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.434 [2024-11-19 10:55:26.691731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.434 qpair failed and we were unable to recover it.
00:28:19.434 [2024-11-19 10:55:26.691925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.434 [2024-11-19 10:55:26.691965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.434 qpair failed and we were unable to recover it.
00:28:19.434 [2024-11-19 10:55:26.692136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.434 [2024-11-19 10:55:26.692168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.434 qpair failed and we were unable to recover it.
00:28:19.434 [2024-11-19 10:55:26.692339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.434 [2024-11-19 10:55:26.692370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.434 qpair failed and we were unable to recover it.
00:28:19.434 [2024-11-19 10:55:26.692654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.434 [2024-11-19 10:55:26.692686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.434 qpair failed and we were unable to recover it.
00:28:19.434 [2024-11-19 10:55:26.692968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.434 [2024-11-19 10:55:26.693000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.434 qpair failed and we were unable to recover it.
00:28:19.434 [2024-11-19 10:55:26.693219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.434 [2024-11-19 10:55:26.693250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.434 qpair failed and we were unable to recover it.
00:28:19.434 [2024-11-19 10:55:26.693514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.434 [2024-11-19 10:55:26.693545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.434 qpair failed and we were unable to recover it.
00:28:19.434 [2024-11-19 10:55:26.693793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.434 [2024-11-19 10:55:26.693825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.434 qpair failed and we were unable to recover it.
00:28:19.434 [2024-11-19 10:55:26.694063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.434 [2024-11-19 10:55:26.694095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.434 qpair failed and we were unable to recover it.
00:28:19.434 [2024-11-19 10:55:26.694300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.434 [2024-11-19 10:55:26.694331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.434 qpair failed and we were unable to recover it.
00:28:19.434 [2024-11-19 10:55:26.694524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.434 [2024-11-19 10:55:26.694555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.434 qpair failed and we were unable to recover it.
00:28:19.434 [2024-11-19 10:55:26.694727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.434 [2024-11-19 10:55:26.694758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.434 qpair failed and we were unable to recover it.
00:28:19.434 [2024-11-19 10:55:26.694861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.434 [2024-11-19 10:55:26.694892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.434 qpair failed and we were unable to recover it.
00:28:19.434 [2024-11-19 10:55:26.695092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.434 [2024-11-19 10:55:26.695124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.434 qpair failed and we were unable to recover it.
00:28:19.434 [2024-11-19 10:55:26.695434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.434 [2024-11-19 10:55:26.695464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.434 qpair failed and we were unable to recover it.
00:28:19.434 [2024-11-19 10:55:26.695649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.434 [2024-11-19 10:55:26.695681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.434 qpair failed and we were unable to recover it.
00:28:19.434 [2024-11-19 10:55:26.695921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.434 [2024-11-19 10:55:26.695962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.434 qpair failed and we were unable to recover it.
00:28:19.434 [2024-11-19 10:55:26.696102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.434 [2024-11-19 10:55:26.696133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.434 qpair failed and we were unable to recover it.
00:28:19.434 [2024-11-19 10:55:26.696342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.434 [2024-11-19 10:55:26.696374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.434 qpair failed and we were unable to recover it.
00:28:19.434 [2024-11-19 10:55:26.696574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.434 [2024-11-19 10:55:26.696605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.434 qpair failed and we were unable to recover it.
00:28:19.434 [2024-11-19 10:55:26.696778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.434 [2024-11-19 10:55:26.696809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.434 qpair failed and we were unable to recover it.
00:28:19.434 [2024-11-19 10:55:26.696938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.434 [2024-11-19 10:55:26.696992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.434 qpair failed and we were unable to recover it.
00:28:19.435 [2024-11-19 10:55:26.697182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.435 [2024-11-19 10:55:26.697214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.435 qpair failed and we were unable to recover it.
00:28:19.435 [2024-11-19 10:55:26.697336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.435 [2024-11-19 10:55:26.697368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.435 qpair failed and we were unable to recover it.
00:28:19.435 [2024-11-19 10:55:26.697537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.435 [2024-11-19 10:55:26.697568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.435 qpair failed and we were unable to recover it.
00:28:19.435 [2024-11-19 10:55:26.697830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.435 [2024-11-19 10:55:26.697862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.435 qpair failed and we were unable to recover it.
00:28:19.435 [2024-11-19 10:55:26.698061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.435 [2024-11-19 10:55:26.698095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.435 qpair failed and we were unable to recover it.
00:28:19.435 [2024-11-19 10:55:26.698264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.435 [2024-11-19 10:55:26.698295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.435 qpair failed and we were unable to recover it.
00:28:19.435 [2024-11-19 10:55:26.698472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.435 [2024-11-19 10:55:26.698504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.435 qpair failed and we were unable to recover it.
00:28:19.435 [2024-11-19 10:55:26.698636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.435 [2024-11-19 10:55:26.698667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.435 qpair failed and we were unable to recover it.
00:28:19.435 [2024-11-19 10:55:26.698904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.435 [2024-11-19 10:55:26.698935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.435 qpair failed and we were unable to recover it.
00:28:19.435 [2024-11-19 10:55:26.699056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.435 [2024-11-19 10:55:26.699094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.435 qpair failed and we were unable to recover it.
00:28:19.435 [2024-11-19 10:55:26.699271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.435 [2024-11-19 10:55:26.699301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.435 qpair failed and we were unable to recover it.
00:28:19.435 [2024-11-19 10:55:26.699486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.435 [2024-11-19 10:55:26.699517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.435 qpair failed and we were unable to recover it.
00:28:19.435 [2024-11-19 10:55:26.699780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.435 [2024-11-19 10:55:26.699811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.435 qpair failed and we were unable to recover it.
00:28:19.435 [2024-11-19 10:55:26.699994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.435 [2024-11-19 10:55:26.700026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.435 qpair failed and we were unable to recover it.
00:28:19.435 [2024-11-19 10:55:26.700279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.435 [2024-11-19 10:55:26.700311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.435 qpair failed and we were unable to recover it.
00:28:19.435 [2024-11-19 10:55:26.700550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.435 [2024-11-19 10:55:26.700582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.435 qpair failed and we were unable to recover it.
00:28:19.435 [2024-11-19 10:55:26.700768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.435 [2024-11-19 10:55:26.700799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.435 qpair failed and we were unable to recover it.
00:28:19.435 [2024-11-19 10:55:26.700984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.435 [2024-11-19 10:55:26.701017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.435 qpair failed and we were unable to recover it.
00:28:19.435 [2024-11-19 10:55:26.701124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.435 [2024-11-19 10:55:26.701155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.435 qpair failed and we were unable to recover it.
00:28:19.435 [2024-11-19 10:55:26.701278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.435 [2024-11-19 10:55:26.701309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.435 qpair failed and we were unable to recover it.
00:28:19.435 [2024-11-19 10:55:26.701490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.435 [2024-11-19 10:55:26.701521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.435 qpair failed and we were unable to recover it.
00:28:19.435 [2024-11-19 10:55:26.701694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.435 [2024-11-19 10:55:26.701726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.435 qpair failed and we were unable to recover it.
00:28:19.435 [2024-11-19 10:55:26.702103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.435 [2024-11-19 10:55:26.702137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.435 qpair failed and we were unable to recover it.
00:28:19.435 [2024-11-19 10:55:26.702282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.435 [2024-11-19 10:55:26.702315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.435 qpair failed and we were unable to recover it.
00:28:19.435 [2024-11-19 10:55:26.702449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.435 [2024-11-19 10:55:26.702480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.435 qpair failed and we were unable to recover it.
00:28:19.435 [2024-11-19 10:55:26.702742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.435 [2024-11-19 10:55:26.702773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.435 qpair failed and we were unable to recover it.
00:28:19.435 [2024-11-19 10:55:26.702944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.435 [2024-11-19 10:55:26.702983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.435 qpair failed and we were unable to recover it.
00:28:19.435 [2024-11-19 10:55:26.703168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.435 [2024-11-19 10:55:26.703200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.435 qpair failed and we were unable to recover it.
00:28:19.435 [2024-11-19 10:55:26.703323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.435 [2024-11-19 10:55:26.703354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.435 qpair failed and we were unable to recover it.
00:28:19.435 [2024-11-19 10:55:26.703538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.435 [2024-11-19 10:55:26.703569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.435 qpair failed and we were unable to recover it.
00:28:19.435 [2024-11-19 10:55:26.703772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.435 [2024-11-19 10:55:26.703804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.435 qpair failed and we were unable to recover it.
00:28:19.435 [2024-11-19 10:55:26.703939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.435 [2024-11-19 10:55:26.703980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.435 qpair failed and we were unable to recover it.
00:28:19.435 [2024-11-19 10:55:26.704157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.435 [2024-11-19 10:55:26.704187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.435 qpair failed and we were unable to recover it.
00:28:19.435 [2024-11-19 10:55:26.704369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.435 [2024-11-19 10:55:26.704400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.435 qpair failed and we were unable to recover it.
00:28:19.435 [2024-11-19 10:55:26.704524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.435 [2024-11-19 10:55:26.704554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.435 qpair failed and we were unable to recover it.
00:28:19.435 [2024-11-19 10:55:26.704734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.435 [2024-11-19 10:55:26.704765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.435 qpair failed and we were unable to recover it.
00:28:19.435 [2024-11-19 10:55:26.705001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.435 [2024-11-19 10:55:26.705071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.435 qpair failed and we were unable to recover it.
00:28:19.435 [2024-11-19 10:55:26.705299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.436 [2024-11-19 10:55:26.705367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.436 qpair failed and we were unable to recover it.
00:28:19.436 [2024-11-19 10:55:26.705525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.436 [2024-11-19 10:55:26.705560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.436 qpair failed and we were unable to recover it.
00:28:19.436 [2024-11-19 10:55:26.705751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.436 [2024-11-19 10:55:26.705784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.436 qpair failed and we were unable to recover it.
00:28:19.436 [2024-11-19 10:55:26.705997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.436 [2024-11-19 10:55:26.706032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.436 qpair failed and we were unable to recover it.
00:28:19.436 [2024-11-19 10:55:26.706171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.436 [2024-11-19 10:55:26.706204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.436 qpair failed and we were unable to recover it.
00:28:19.436 [2024-11-19 10:55:26.706313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.436 [2024-11-19 10:55:26.706344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.436 qpair failed and we were unable to recover it.
00:28:19.436 [2024-11-19 10:55:26.706472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.436 [2024-11-19 10:55:26.706503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.436 qpair failed and we were unable to recover it.
00:28:19.436 [2024-11-19 10:55:26.706687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.436 [2024-11-19 10:55:26.706718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.436 qpair failed and we were unable to recover it.
00:28:19.436 [2024-11-19 10:55:26.707005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.436 [2024-11-19 10:55:26.707038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.436 qpair failed and we were unable to recover it.
00:28:19.436 [2024-11-19 10:55:26.707220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.436 [2024-11-19 10:55:26.707251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.436 qpair failed and we were unable to recover it.
00:28:19.436 [2024-11-19 10:55:26.707433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.436 [2024-11-19 10:55:26.707465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.436 qpair failed and we were unable to recover it.
00:28:19.436 [2024-11-19 10:55:26.707570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.436 [2024-11-19 10:55:26.707602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.436 qpair failed and we were unable to recover it.
00:28:19.436 [2024-11-19 10:55:26.707863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.436 [2024-11-19 10:55:26.707903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.436 qpair failed and we were unable to recover it.
00:28:19.436 [2024-11-19 10:55:26.708095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.436 [2024-11-19 10:55:26.708128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.436 qpair failed and we were unable to recover it.
00:28:19.436 [2024-11-19 10:55:26.708297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.436 [2024-11-19 10:55:26.708329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.436 qpair failed and we were unable to recover it.
00:28:19.436 [2024-11-19 10:55:26.708516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.436 [2024-11-19 10:55:26.708547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.436 qpair failed and we were unable to recover it.
00:28:19.436 [2024-11-19 10:55:26.708727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.436 [2024-11-19 10:55:26.708758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.436 qpair failed and we were unable to recover it.
00:28:19.436 [2024-11-19 10:55:26.708883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.436 [2024-11-19 10:55:26.708916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.436 qpair failed and we were unable to recover it.
00:28:19.436 [2024-11-19 10:55:26.709207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.436 [2024-11-19 10:55:26.709248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.436 qpair failed and we were unable to recover it.
00:28:19.436 [2024-11-19 10:55:26.709448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.436 [2024-11-19 10:55:26.709482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.436 qpair failed and we were unable to recover it.
00:28:19.436 [2024-11-19 10:55:26.709653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.436 [2024-11-19 10:55:26.709684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.436 qpair failed and we were unable to recover it.
00:28:19.436 [2024-11-19 10:55:26.709942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.436 [2024-11-19 10:55:26.709991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.436 qpair failed and we were unable to recover it.
00:28:19.436 [2024-11-19 10:55:26.710165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.436 [2024-11-19 10:55:26.710197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.436 qpair failed and we were unable to recover it.
00:28:19.436 [2024-11-19 10:55:26.710438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.436 [2024-11-19 10:55:26.710470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.436 qpair failed and we were unable to recover it.
00:28:19.436 [2024-11-19 10:55:26.710716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.436 [2024-11-19 10:55:26.710748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.436 qpair failed and we were unable to recover it.
00:28:19.436 [2024-11-19 10:55:26.710936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.436 [2024-11-19 10:55:26.710980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.436 qpair failed and we were unable to recover it.
00:28:19.436 [2024-11-19 10:55:26.711182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.436 [2024-11-19 10:55:26.711214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.436 qpair failed and we were unable to recover it.
00:28:19.436 [2024-11-19 10:55:26.711403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.436 [2024-11-19 10:55:26.711435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.436 qpair failed and we were unable to recover it.
00:28:19.436 [2024-11-19 10:55:26.711621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.436 [2024-11-19 10:55:26.711652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.436 qpair failed and we were unable to recover it.
00:28:19.436 [2024-11-19 10:55:26.711832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.436 [2024-11-19 10:55:26.711863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.436 qpair failed and we were unable to recover it.
00:28:19.436 [2024-11-19 10:55:26.712124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.436 [2024-11-19 10:55:26.712158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.436 qpair failed and we were unable to recover it.
00:28:19.436 [2024-11-19 10:55:26.712346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.436 [2024-11-19 10:55:26.712377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.436 qpair failed and we were unable to recover it.
00:28:19.436 [2024-11-19 10:55:26.712586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.436 [2024-11-19 10:55:26.712618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.436 qpair failed and we were unable to recover it.
00:28:19.436 [2024-11-19 10:55:26.712740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.436 [2024-11-19 10:55:26.712772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.436 qpair failed and we were unable to recover it.
00:28:19.436 [2024-11-19 10:55:26.712888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.436 [2024-11-19 10:55:26.712919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.436 qpair failed and we were unable to recover it.
00:28:19.436 [2024-11-19 10:55:26.713051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.436 [2024-11-19 10:55:26.713083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.436 qpair failed and we were unable to recover it.
00:28:19.436 [2024-11-19 10:55:26.713266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.436 [2024-11-19 10:55:26.713297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.436 qpair failed and we were unable to recover it.
00:28:19.436 [2024-11-19 10:55:26.713533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.436 [2024-11-19 10:55:26.713564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.436 qpair failed and we were unable to recover it.
00:28:19.436 [2024-11-19 10:55:26.713762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.437 [2024-11-19 10:55:26.713794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.437 qpair failed and we were unable to recover it.
00:28:19.437 [2024-11-19 10:55:26.714056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.437 [2024-11-19 10:55:26.714098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.437 qpair failed and we were unable to recover it.
00:28:19.437 [2024-11-19 10:55:26.714226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.437 [2024-11-19 10:55:26.714258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.437 qpair failed and we were unable to recover it.
00:28:19.437 [2024-11-19 10:55:26.714506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.437 [2024-11-19 10:55:26.714538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.437 qpair failed and we were unable to recover it.
00:28:19.437 [2024-11-19 10:55:26.714721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.437 [2024-11-19 10:55:26.714752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.437 qpair failed and we were unable to recover it.
00:28:19.437 [2024-11-19 10:55:26.714939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.437 [2024-11-19 10:55:26.714981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.437 qpair failed and we were unable to recover it.
00:28:19.437 [2024-11-19 10:55:26.715247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.437 [2024-11-19 10:55:26.715279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.437 qpair failed and we were unable to recover it.
00:28:19.437 [2024-11-19 10:55:26.715516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.437 [2024-11-19 10:55:26.715547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.437 qpair failed and we were unable to recover it.
00:28:19.437 [2024-11-19 10:55:26.715661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.437 [2024-11-19 10:55:26.715692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.437 qpair failed and we were unable to recover it.
00:28:19.437 [2024-11-19 10:55:26.715962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.437 [2024-11-19 10:55:26.715996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.437 qpair failed and we were unable to recover it.
00:28:19.437 [2024-11-19 10:55:26.716206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.437 [2024-11-19 10:55:26.716237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.437 qpair failed and we were unable to recover it.
00:28:19.437 [2024-11-19 10:55:26.716418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.437 [2024-11-19 10:55:26.716449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.437 qpair failed and we were unable to recover it.
00:28:19.437 [2024-11-19 10:55:26.716636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.437 [2024-11-19 10:55:26.716667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.437 qpair failed and we were unable to recover it.
00:28:19.437 [2024-11-19 10:55:26.716855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.437 [2024-11-19 10:55:26.716886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.437 qpair failed and we were unable to recover it.
00:28:19.437 [2024-11-19 10:55:26.717125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.437 [2024-11-19 10:55:26.717159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.437 qpair failed and we were unable to recover it.
00:28:19.437 [2024-11-19 10:55:26.717361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.437 [2024-11-19 10:55:26.717399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.437 qpair failed and we were unable to recover it.
00:28:19.437 [2024-11-19 10:55:26.717658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.437 [2024-11-19 10:55:26.717691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.437 qpair failed and we were unable to recover it.
00:28:19.437 [2024-11-19 10:55:26.717832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.437 [2024-11-19 10:55:26.717863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.437 qpair failed and we were unable to recover it.
00:28:19.437 [2024-11-19 10:55:26.718045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.437 [2024-11-19 10:55:26.718078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.437 qpair failed and we were unable to recover it.
00:28:19.437 [2024-11-19 10:55:26.718337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.437 [2024-11-19 10:55:26.718369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.437 qpair failed and we were unable to recover it.
00:28:19.437 [2024-11-19 10:55:26.718557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.437 [2024-11-19 10:55:26.718588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.437 qpair failed and we were unable to recover it.
00:28:19.437 [2024-11-19 10:55:26.718778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.437 [2024-11-19 10:55:26.718808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.437 qpair failed and we were unable to recover it.
00:28:19.437 [2024-11-19 10:55:26.718943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.437 [2024-11-19 10:55:26.718985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.437 qpair failed and we were unable to recover it.
00:28:19.437 [2024-11-19 10:55:26.719175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.437 [2024-11-19 10:55:26.719206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.437 qpair failed and we were unable to recover it.
00:28:19.437 [2024-11-19 10:55:26.719467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.437 [2024-11-19 10:55:26.719499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.437 qpair failed and we were unable to recover it.
00:28:19.437 [2024-11-19 10:55:26.719690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.437 [2024-11-19 10:55:26.719721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.437 qpair failed and we were unable to recover it.
00:28:19.437 [2024-11-19 10:55:26.719896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.437 [2024-11-19 10:55:26.719927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.437 qpair failed and we were unable to recover it.
00:28:19.437 [2024-11-19 10:55:26.720125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.437 [2024-11-19 10:55:26.720157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.437 qpair failed and we were unable to recover it. 00:28:19.437 [2024-11-19 10:55:26.720338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.437 [2024-11-19 10:55:26.720377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.437 qpair failed and we were unable to recover it. 00:28:19.437 [2024-11-19 10:55:26.720504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.437 [2024-11-19 10:55:26.720536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.437 qpair failed and we were unable to recover it. 00:28:19.437 [2024-11-19 10:55:26.720663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.437 [2024-11-19 10:55:26.720694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.437 qpair failed and we were unable to recover it. 00:28:19.437 [2024-11-19 10:55:26.720825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.437 [2024-11-19 10:55:26.720856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.437 qpair failed and we were unable to recover it. 00:28:19.437 [2024-11-19 10:55:26.721029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.437 [2024-11-19 10:55:26.721062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.437 qpair failed and we were unable to recover it. 00:28:19.437 [2024-11-19 10:55:26.721249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.437 [2024-11-19 10:55:26.721280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.437 qpair failed and we were unable to recover it. 00:28:19.437 [2024-11-19 10:55:26.721483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.437 [2024-11-19 10:55:26.721514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.437 qpair failed and we were unable to recover it. 00:28:19.437 [2024-11-19 10:55:26.721703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.437 [2024-11-19 10:55:26.721734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.437 qpair failed and we were unable to recover it. 00:28:19.437 [2024-11-19 10:55:26.721921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.437 [2024-11-19 10:55:26.721963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.437 qpair failed and we were unable to recover it. 
00:28:19.437 [2024-11-19 10:55:26.722139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.437 [2024-11-19 10:55:26.722170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.437 qpair failed and we were unable to recover it. 00:28:19.437 [2024-11-19 10:55:26.722435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.438 [2024-11-19 10:55:26.722467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.438 qpair failed and we were unable to recover it. 00:28:19.438 [2024-11-19 10:55:26.722648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.438 [2024-11-19 10:55:26.722678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.438 qpair failed and we were unable to recover it. 00:28:19.438 [2024-11-19 10:55:26.722806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.438 [2024-11-19 10:55:26.722837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.438 qpair failed and we were unable to recover it. 00:28:19.438 [2024-11-19 10:55:26.723051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.438 [2024-11-19 10:55:26.723084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.438 qpair failed and we were unable to recover it. 00:28:19.438 [2024-11-19 10:55:26.723267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.438 [2024-11-19 10:55:26.723300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.438 qpair failed and we were unable to recover it. 00:28:19.438 [2024-11-19 10:55:26.723411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.438 [2024-11-19 10:55:26.723442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.438 qpair failed and we were unable to recover it. 00:28:19.438 [2024-11-19 10:55:26.723580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.438 [2024-11-19 10:55:26.723611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.438 qpair failed and we were unable to recover it. 00:28:19.438 [2024-11-19 10:55:26.723744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.438 [2024-11-19 10:55:26.723776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.438 qpair failed and we were unable to recover it. 00:28:19.438 [2024-11-19 10:55:26.723885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.438 [2024-11-19 10:55:26.723916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.438 qpair failed and we were unable to recover it. 
00:28:19.438 [2024-11-19 10:55:26.724181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.438 [2024-11-19 10:55:26.724213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.438 qpair failed and we were unable to recover it. 00:28:19.438 [2024-11-19 10:55:26.724401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.438 [2024-11-19 10:55:26.724433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.438 qpair failed and we were unable to recover it. 00:28:19.438 [2024-11-19 10:55:26.724669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.438 [2024-11-19 10:55:26.724699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.438 qpair failed and we were unable to recover it. 00:28:19.438 [2024-11-19 10:55:26.724833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.438 [2024-11-19 10:55:26.724864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.438 qpair failed and we were unable to recover it. 00:28:19.438 [2024-11-19 10:55:26.725131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.438 [2024-11-19 10:55:26.725163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.438 qpair failed and we were unable to recover it. 00:28:19.438 [2024-11-19 10:55:26.725339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.438 [2024-11-19 10:55:26.725370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.438 qpair failed and we were unable to recover it. 00:28:19.438 [2024-11-19 10:55:26.725497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.438 [2024-11-19 10:55:26.725528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.438 qpair failed and we were unable to recover it. 00:28:19.438 [2024-11-19 10:55:26.725764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.438 [2024-11-19 10:55:26.725795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.438 qpair failed and we were unable to recover it. 00:28:19.438 [2024-11-19 10:55:26.726042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.438 [2024-11-19 10:55:26.726095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.438 qpair failed and we were unable to recover it. 00:28:19.438 [2024-11-19 10:55:26.726283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.438 [2024-11-19 10:55:26.726315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.438 qpair failed and we were unable to recover it. 
00:28:19.438 [2024-11-19 10:55:26.726503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.438 [2024-11-19 10:55:26.726534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.438 qpair failed and we were unable to recover it. 00:28:19.438 [2024-11-19 10:55:26.726785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.438 [2024-11-19 10:55:26.726816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.438 qpair failed and we were unable to recover it. 00:28:19.438 [2024-11-19 10:55:26.727076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.438 [2024-11-19 10:55:26.727108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.438 qpair failed and we were unable to recover it. 00:28:19.438 [2024-11-19 10:55:26.727227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.438 [2024-11-19 10:55:26.727259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.438 qpair failed and we were unable to recover it. 00:28:19.438 [2024-11-19 10:55:26.727389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.438 [2024-11-19 10:55:26.727420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.438 qpair failed and we were unable to recover it. 00:28:19.438 [2024-11-19 10:55:26.727655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.438 [2024-11-19 10:55:26.727686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.438 qpair failed and we were unable to recover it. 00:28:19.438 [2024-11-19 10:55:26.727867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.438 [2024-11-19 10:55:26.727898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.438 qpair failed and we were unable to recover it. 00:28:19.438 [2024-11-19 10:55:26.728029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.438 [2024-11-19 10:55:26.728062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.438 qpair failed and we were unable to recover it. 00:28:19.438 [2024-11-19 10:55:26.728202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.438 [2024-11-19 10:55:26.728234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.438 qpair failed and we were unable to recover it. 00:28:19.438 [2024-11-19 10:55:26.728363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.438 [2024-11-19 10:55:26.728394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.438 qpair failed and we were unable to recover it. 
00:28:19.438 [2024-11-19 10:55:26.728566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.438 [2024-11-19 10:55:26.728597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.438 qpair failed and we were unable to recover it. 00:28:19.438 [2024-11-19 10:55:26.728719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.438 [2024-11-19 10:55:26.728757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.438 qpair failed and we were unable to recover it. 00:28:19.438 [2024-11-19 10:55:26.728927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.438 [2024-11-19 10:55:26.728976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.438 qpair failed and we were unable to recover it. 00:28:19.438 [2024-11-19 10:55:26.729173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.438 [2024-11-19 10:55:26.729203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.438 qpair failed and we were unable to recover it. 00:28:19.438 [2024-11-19 10:55:26.729335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.438 [2024-11-19 10:55:26.729366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.438 qpair failed and we were unable to recover it. 00:28:19.438 [2024-11-19 10:55:26.729539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.438 [2024-11-19 10:55:26.729570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.438 qpair failed and we were unable to recover it. 00:28:19.438 [2024-11-19 10:55:26.729751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.438 [2024-11-19 10:55:26.729782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.438 qpair failed and we were unable to recover it. 00:28:19.438 [2024-11-19 10:55:26.730046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.438 [2024-11-19 10:55:26.730079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.438 qpair failed and we were unable to recover it. 00:28:19.438 [2024-11-19 10:55:26.730259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.438 [2024-11-19 10:55:26.730290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.438 qpair failed and we were unable to recover it. 00:28:19.439 [2024-11-19 10:55:26.730542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.439 [2024-11-19 10:55:26.730574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.439 qpair failed and we were unable to recover it. 
00:28:19.439 [2024-11-19 10:55:26.730811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.439 [2024-11-19 10:55:26.730842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.439 qpair failed and we were unable to recover it. 00:28:19.439 [2024-11-19 10:55:26.730975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.439 [2024-11-19 10:55:26.731008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.439 qpair failed and we were unable to recover it. 00:28:19.439 [2024-11-19 10:55:26.731130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.439 [2024-11-19 10:55:26.731162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.439 qpair failed and we were unable to recover it. 00:28:19.439 [2024-11-19 10:55:26.731287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.439 [2024-11-19 10:55:26.731318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.439 qpair failed and we were unable to recover it. 00:28:19.439 [2024-11-19 10:55:26.731437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.439 [2024-11-19 10:55:26.731469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.439 qpair failed and we were unable to recover it. 00:28:19.439 [2024-11-19 10:55:26.731738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.439 [2024-11-19 10:55:26.731770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.439 qpair failed and we were unable to recover it. 00:28:19.439 [2024-11-19 10:55:26.731997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.439 [2024-11-19 10:55:26.732029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.439 qpair failed and we were unable to recover it. 00:28:19.439 [2024-11-19 10:55:26.732160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.439 [2024-11-19 10:55:26.732191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.439 qpair failed and we were unable to recover it. 00:28:19.439 [2024-11-19 10:55:26.732382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.439 [2024-11-19 10:55:26.732413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.439 qpair failed and we were unable to recover it. 00:28:19.439 [2024-11-19 10:55:26.732657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.439 [2024-11-19 10:55:26.732688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.439 qpair failed and we were unable to recover it. 
00:28:19.439 [2024-11-19 10:55:26.732809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.439 [2024-11-19 10:55:26.732841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.439 qpair failed and we were unable to recover it. 00:28:19.439 [2024-11-19 10:55:26.733106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.439 [2024-11-19 10:55:26.733139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.439 qpair failed and we were unable to recover it. 00:28:19.439 [2024-11-19 10:55:26.733323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.439 [2024-11-19 10:55:26.733354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.439 qpair failed and we were unable to recover it. 00:28:19.439 [2024-11-19 10:55:26.733544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.439 [2024-11-19 10:55:26.733575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.439 qpair failed and we were unable to recover it. 00:28:19.439 [2024-11-19 10:55:26.733755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.439 [2024-11-19 10:55:26.733787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.439 qpair failed and we were unable to recover it. 00:28:19.439 [2024-11-19 10:55:26.733976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.439 [2024-11-19 10:55:26.734009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.439 qpair failed and we were unable to recover it. 00:28:19.439 [2024-11-19 10:55:26.734219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.439 [2024-11-19 10:55:26.734251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.439 qpair failed and we were unable to recover it. 00:28:19.439 [2024-11-19 10:55:26.734430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.439 [2024-11-19 10:55:26.734462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.439 qpair failed and we were unable to recover it. 00:28:19.439 [2024-11-19 10:55:26.734644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.439 [2024-11-19 10:55:26.734676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.439 qpair failed and we were unable to recover it. 00:28:19.439 [2024-11-19 10:55:26.734887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.439 [2024-11-19 10:55:26.734919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.439 qpair failed and we were unable to recover it. 
00:28:19.439 [2024-11-19 10:55:26.735132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.439 [2024-11-19 10:55:26.735164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.439 qpair failed and we were unable to recover it. 00:28:19.439 [2024-11-19 10:55:26.735418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.439 [2024-11-19 10:55:26.735450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.439 qpair failed and we were unable to recover it. 00:28:19.439 [2024-11-19 10:55:26.735632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.439 [2024-11-19 10:55:26.735663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.439 qpair failed and we were unable to recover it. 00:28:19.439 [2024-11-19 10:55:26.735791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.439 [2024-11-19 10:55:26.735821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.439 qpair failed and we were unable to recover it. 00:28:19.439 [2024-11-19 10:55:26.735943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.439 [2024-11-19 10:55:26.735986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.439 qpair failed and we were unable to recover it. 00:28:19.439 [2024-11-19 10:55:26.736175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.439 [2024-11-19 10:55:26.736206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.439 qpair failed and we were unable to recover it. 00:28:19.439 [2024-11-19 10:55:26.736344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.439 [2024-11-19 10:55:26.736392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.439 qpair failed and we were unable to recover it. 00:28:19.439 [2024-11-19 10:55:26.736589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.439 [2024-11-19 10:55:26.736619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.439 qpair failed and we were unable to recover it. 00:28:19.439 [2024-11-19 10:55:26.736811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.439 [2024-11-19 10:55:26.736843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.439 qpair failed and we were unable to recover it. 00:28:19.439 [2024-11-19 10:55:26.737090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.439 [2024-11-19 10:55:26.737122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.439 qpair failed and we were unable to recover it. 
00:28:19.439 [2024-11-19 10:55:26.737304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.439 [2024-11-19 10:55:26.737336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.439 qpair failed and we were unable to recover it. 00:28:19.439 [2024-11-19 10:55:26.737462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.439 [2024-11-19 10:55:26.737499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.439 qpair failed and we were unable to recover it. 00:28:19.439 [2024-11-19 10:55:26.737628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.439 [2024-11-19 10:55:26.737659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.439 qpair failed and we were unable to recover it. 00:28:19.440 [2024-11-19 10:55:26.737900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.440 [2024-11-19 10:55:26.737931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.440 qpair failed and we were unable to recover it. 00:28:19.440 [2024-11-19 10:55:26.738229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.440 [2024-11-19 10:55:26.738262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.440 qpair failed and we were unable to recover it. 00:28:19.440 [2024-11-19 10:55:26.738514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.440 [2024-11-19 10:55:26.738544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.440 qpair failed and we were unable to recover it. 00:28:19.440 [2024-11-19 10:55:26.738678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.440 [2024-11-19 10:55:26.738709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.440 qpair failed and we were unable to recover it. 00:28:19.440 [2024-11-19 10:55:26.738959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.440 [2024-11-19 10:55:26.738991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.440 qpair failed and we were unable to recover it. 00:28:19.440 [2024-11-19 10:55:26.739181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.440 [2024-11-19 10:55:26.739212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.440 qpair failed and we were unable to recover it. 00:28:19.440 [2024-11-19 10:55:26.739471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.440 [2024-11-19 10:55:26.739503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.440 qpair failed and we were unable to recover it. 
00:28:19.440 [2024-11-19 10:55:26.739752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.440 [2024-11-19 10:55:26.739784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.440 qpair failed and we were unable to recover it. 00:28:19.440 [2024-11-19 10:55:26.739972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.440 [2024-11-19 10:55:26.740004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.440 qpair failed and we were unable to recover it. 00:28:19.440 [2024-11-19 10:55:26.740224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.440 [2024-11-19 10:55:26.740255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.440 qpair failed and we were unable to recover it. 00:28:19.440 [2024-11-19 10:55:26.740432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.440 [2024-11-19 10:55:26.740464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.440 qpair failed and we were unable to recover it. 00:28:19.440 [2024-11-19 10:55:26.740683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.440 [2024-11-19 10:55:26.740714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.440 qpair failed and we were unable to recover it. 00:28:19.440 [2024-11-19 10:55:26.740862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.440 [2024-11-19 10:55:26.740894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.440 qpair failed and we were unable to recover it. 00:28:19.440 [2024-11-19 10:55:26.741152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.440 [2024-11-19 10:55:26.741185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.440 qpair failed and we were unable to recover it. 00:28:19.440 [2024-11-19 10:55:26.741363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.440 [2024-11-19 10:55:26.741394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.440 qpair failed and we were unable to recover it. 00:28:19.440 [2024-11-19 10:55:26.741655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.440 [2024-11-19 10:55:26.741687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.440 qpair failed and we were unable to recover it. 00:28:19.440 [2024-11-19 10:55:26.741905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.440 [2024-11-19 10:55:26.741937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.440 qpair failed and we were unable to recover it. 
00:28:19.440 [2024-11-19 10:55:26.742230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.440 [2024-11-19 10:55:26.742262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.440 qpair failed and we were unable to recover it. 00:28:19.440 [2024-11-19 10:55:26.742522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.440 [2024-11-19 10:55:26.742554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.440 qpair failed and we were unable to recover it. 00:28:19.440 [2024-11-19 10:55:26.742806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.440 [2024-11-19 10:55:26.742837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.440 qpair failed and we were unable to recover it. 00:28:19.440 [2024-11-19 10:55:26.743027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.440 [2024-11-19 10:55:26.743060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.440 qpair failed and we were unable to recover it. 00:28:19.440 [2024-11-19 10:55:26.743249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.440 [2024-11-19 10:55:26.743281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.440 qpair failed and we were unable to recover it. 00:28:19.440 [2024-11-19 10:55:26.743467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.440 [2024-11-19 10:55:26.743498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.440 qpair failed and we were unable to recover it. 00:28:19.440 [2024-11-19 10:55:26.743689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.440 [2024-11-19 10:55:26.743719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.440 qpair failed and we were unable to recover it. 00:28:19.440 [2024-11-19 10:55:26.743913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.440 [2024-11-19 10:55:26.743944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.440 qpair failed and we were unable to recover it. 00:28:19.440 [2024-11-19 10:55:26.744155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.440 [2024-11-19 10:55:26.744188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.440 qpair failed and we were unable to recover it. 00:28:19.440 [2024-11-19 10:55:26.744451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.440 [2024-11-19 10:55:26.744483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.440 qpair failed and we were unable to recover it. 
00:28:19.440 [2024-11-19 10:55:26.744704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.440 [2024-11-19 10:55:26.744734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.440 qpair failed and we were unable to recover it. 00:28:19.440 [2024-11-19 10:55:26.744974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.440 [2024-11-19 10:55:26.745006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.440 qpair failed and we were unable to recover it. 00:28:19.440 [2024-11-19 10:55:26.745189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.440 [2024-11-19 10:55:26.745220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.440 qpair failed and we were unable to recover it. 00:28:19.440 [2024-11-19 10:55:26.745329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.440 [2024-11-19 10:55:26.745360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.440 qpair failed and we were unable to recover it. 00:28:19.440 [2024-11-19 10:55:26.745544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.440 [2024-11-19 10:55:26.745575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.440 qpair failed and we were unable to recover it. 00:28:19.440 [2024-11-19 10:55:26.745816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.440 [2024-11-19 10:55:26.745846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.440 qpair failed and we were unable to recover it. 00:28:19.440 [2024-11-19 10:55:26.746053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.440 [2024-11-19 10:55:26.746086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.440 qpair failed and we were unable to recover it. 00:28:19.440 [2024-11-19 10:55:26.746203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.440 [2024-11-19 10:55:26.746235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.440 qpair failed and we were unable to recover it. 00:28:19.440 [2024-11-19 10:55:26.746418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.440 [2024-11-19 10:55:26.746449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.440 qpair failed and we were unable to recover it. 00:28:19.440 [2024-11-19 10:55:26.746567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.440 [2024-11-19 10:55:26.746598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.440 qpair failed and we were unable to recover it. 
00:28:19.440 [2024-11-19 10:55:26.746788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.441 [2024-11-19 10:55:26.746820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.441 qpair failed and we were unable to recover it. 00:28:19.441 [2024-11-19 10:55:26.747008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.441 [2024-11-19 10:55:26.747050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.441 qpair failed and we were unable to recover it. 00:28:19.441 [2024-11-19 10:55:26.747242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.441 [2024-11-19 10:55:26.747274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.441 qpair failed and we were unable to recover it. 00:28:19.441 [2024-11-19 10:55:26.747545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.441 [2024-11-19 10:55:26.747577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.441 qpair failed and we were unable to recover it. 00:28:19.441 [2024-11-19 10:55:26.747841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.441 [2024-11-19 10:55:26.747873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.441 qpair failed and we were unable to recover it. 00:28:19.441 [2024-11-19 10:55:26.748061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.441 [2024-11-19 10:55:26.748095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.441 qpair failed and we were unable to recover it. 00:28:19.441 [2024-11-19 10:55:26.748266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.441 [2024-11-19 10:55:26.748298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.441 qpair failed and we were unable to recover it. 00:28:19.441 [2024-11-19 10:55:26.748428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.441 [2024-11-19 10:55:26.748459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.441 qpair failed and we were unable to recover it. 00:28:19.441 [2024-11-19 10:55:26.748724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.441 [2024-11-19 10:55:26.748754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.441 qpair failed and we were unable to recover it. 00:28:19.441 [2024-11-19 10:55:26.748929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.441 [2024-11-19 10:55:26.748969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.441 qpair failed and we were unable to recover it. 
00:28:19.441 [2024-11-19 10:55:26.749101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.441 [2024-11-19 10:55:26.749132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.441 qpair failed and we were unable to recover it. 00:28:19.441 [2024-11-19 10:55:26.749265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.441 [2024-11-19 10:55:26.749296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.441 qpair failed and we were unable to recover it. 00:28:19.441 [2024-11-19 10:55:26.749537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.441 [2024-11-19 10:55:26.749567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.441 qpair failed and we were unable to recover it. 00:28:19.441 [2024-11-19 10:55:26.749751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.441 [2024-11-19 10:55:26.749782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.441 qpair failed and we were unable to recover it. 00:28:19.441 [2024-11-19 10:55:26.750040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.441 [2024-11-19 10:55:26.750071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.441 qpair failed and we were unable to recover it. 00:28:19.441 [2024-11-19 10:55:26.750267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.441 [2024-11-19 10:55:26.750299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.441 qpair failed and we were unable to recover it. 00:28:19.441 [2024-11-19 10:55:26.750544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.441 [2024-11-19 10:55:26.750576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.441 qpair failed and we were unable to recover it. 00:28:19.441 [2024-11-19 10:55:26.750750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.441 [2024-11-19 10:55:26.750781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.441 qpair failed and we were unable to recover it. 00:28:19.441 [2024-11-19 10:55:26.751070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.441 [2024-11-19 10:55:26.751102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.441 qpair failed and we were unable to recover it. 00:28:19.441 [2024-11-19 10:55:26.751313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.441 [2024-11-19 10:55:26.751345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.441 qpair failed and we were unable to recover it. 
00:28:19.441 [2024-11-19 10:55:26.751536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.441 [2024-11-19 10:55:26.751567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.441 qpair failed and we were unable to recover it. 00:28:19.441 [2024-11-19 10:55:26.751745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.441 [2024-11-19 10:55:26.751776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.441 qpair failed and we were unable to recover it. 00:28:19.441 [2024-11-19 10:55:26.752034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.441 [2024-11-19 10:55:26.752067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.441 qpair failed and we were unable to recover it. 00:28:19.441 [2024-11-19 10:55:26.752279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.441 [2024-11-19 10:55:26.752311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.441 qpair failed and we were unable to recover it. 00:28:19.441 [2024-11-19 10:55:26.752487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.441 [2024-11-19 10:55:26.752518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.441 qpair failed and we were unable to recover it. 00:28:19.441 [2024-11-19 10:55:26.752755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.441 [2024-11-19 10:55:26.752787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.441 qpair failed and we were unable to recover it. 00:28:19.441 [2024-11-19 10:55:26.753013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.441 [2024-11-19 10:55:26.753045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.441 qpair failed and we were unable to recover it. 00:28:19.441 [2024-11-19 10:55:26.753261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.441 [2024-11-19 10:55:26.753292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.441 qpair failed and we were unable to recover it. 00:28:19.441 [2024-11-19 10:55:26.753576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.441 [2024-11-19 10:55:26.753609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.441 qpair failed and we were unable to recover it. 00:28:19.441 [2024-11-19 10:55:26.753870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.441 [2024-11-19 10:55:26.753901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.441 qpair failed and we were unable to recover it. 
00:28:19.441 [2024-11-19 10:55:26.754045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.441 [2024-11-19 10:55:26.754078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.441 qpair failed and we were unable to recover it.
00:28:19.441 [... the same three-line failure repeats verbatim (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it"), with only the timestamps advancing from 10:55:26.754355 through 10:55:26.798529 ...]
00:28:19.447 [2024-11-19 10:55:26.798707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.447 [2024-11-19 10:55:26.798738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.447 qpair failed and we were unable to recover it. 00:28:19.447 [2024-11-19 10:55:26.798857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.447 [2024-11-19 10:55:26.798887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.447 qpair failed and we were unable to recover it. 00:28:19.447 [2024-11-19 10:55:26.799131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.447 [2024-11-19 10:55:26.799164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.447 qpair failed and we were unable to recover it. 00:28:19.447 [2024-11-19 10:55:26.799288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.447 [2024-11-19 10:55:26.799321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.447 qpair failed and we were unable to recover it. 00:28:19.447 [2024-11-19 10:55:26.799507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.447 [2024-11-19 10:55:26.799537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.447 qpair failed and we were unable to recover it. 00:28:19.447 [2024-11-19 10:55:26.799666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.447 [2024-11-19 10:55:26.799705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.447 qpair failed and we were unable to recover it. 00:28:19.447 [2024-11-19 10:55:26.799912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.447 [2024-11-19 10:55:26.799944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.447 qpair failed and we were unable to recover it. 00:28:19.447 [2024-11-19 10:55:26.800197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.447 [2024-11-19 10:55:26.800230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.447 qpair failed and we were unable to recover it. 00:28:19.447 [2024-11-19 10:55:26.800375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.447 [2024-11-19 10:55:26.800408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.447 qpair failed and we were unable to recover it. 00:28:19.447 [2024-11-19 10:55:26.800524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.447 [2024-11-19 10:55:26.800555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.447 qpair failed and we were unable to recover it. 
00:28:19.447 [2024-11-19 10:55:26.800732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.447 [2024-11-19 10:55:26.800763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.447 qpair failed and we were unable to recover it. 00:28:19.447 [2024-11-19 10:55:26.801033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.447 [2024-11-19 10:55:26.801067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.447 qpair failed and we were unable to recover it. 00:28:19.447 [2024-11-19 10:55:26.801203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.447 [2024-11-19 10:55:26.801237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.447 qpair failed and we were unable to recover it. 00:28:19.447 [2024-11-19 10:55:26.801367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.447 [2024-11-19 10:55:26.801399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.447 qpair failed and we were unable to recover it. 00:28:19.447 [2024-11-19 10:55:26.801572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.447 [2024-11-19 10:55:26.801604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.447 qpair failed and we were unable to recover it. 00:28:19.447 [2024-11-19 10:55:26.801714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.447 [2024-11-19 10:55:26.801746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.447 qpair failed and we were unable to recover it. 00:28:19.447 [2024-11-19 10:55:26.801851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.447 [2024-11-19 10:55:26.801883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.447 qpair failed and we were unable to recover it. 00:28:19.447 [2024-11-19 10:55:26.802015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.447 [2024-11-19 10:55:26.802047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.447 qpair failed and we were unable to recover it. 00:28:19.447 [2024-11-19 10:55:26.802176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.447 [2024-11-19 10:55:26.802207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.447 qpair failed and we were unable to recover it. 00:28:19.447 [2024-11-19 10:55:26.802386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.447 [2024-11-19 10:55:26.802417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.448 qpair failed and we were unable to recover it. 
00:28:19.448 [2024-11-19 10:55:26.802609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.448 [2024-11-19 10:55:26.802641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.448 qpair failed and we were unable to recover it. 00:28:19.448 [2024-11-19 10:55:26.802754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.448 [2024-11-19 10:55:26.802784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.448 qpair failed and we were unable to recover it. 00:28:19.448 [2024-11-19 10:55:26.803028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.448 [2024-11-19 10:55:26.803060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.448 qpair failed and we were unable to recover it. 00:28:19.448 [2024-11-19 10:55:26.803251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.448 [2024-11-19 10:55:26.803283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.448 qpair failed and we were unable to recover it. 00:28:19.448 [2024-11-19 10:55:26.803451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.448 [2024-11-19 10:55:26.803482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.448 qpair failed and we were unable to recover it. 00:28:19.448 [2024-11-19 10:55:26.803584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.448 [2024-11-19 10:55:26.803615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.448 qpair failed and we were unable to recover it. 00:28:19.448 [2024-11-19 10:55:26.803719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.448 [2024-11-19 10:55:26.803750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.448 qpair failed and we were unable to recover it. 00:28:19.448 [2024-11-19 10:55:26.804001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.448 [2024-11-19 10:55:26.804033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.448 qpair failed and we were unable to recover it. 00:28:19.448 [2024-11-19 10:55:26.804275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.448 [2024-11-19 10:55:26.804312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.448 qpair failed and we were unable to recover it. 00:28:19.448 [2024-11-19 10:55:26.804526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.448 [2024-11-19 10:55:26.804558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.448 qpair failed and we were unable to recover it. 
00:28:19.448 [2024-11-19 10:55:26.804738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.448 [2024-11-19 10:55:26.804770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.448 qpair failed and we were unable to recover it. 00:28:19.448 [2024-11-19 10:55:26.805008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.448 [2024-11-19 10:55:26.805041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.448 qpair failed and we were unable to recover it. 00:28:19.448 [2024-11-19 10:55:26.805209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.448 [2024-11-19 10:55:26.805243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.448 qpair failed and we were unable to recover it. 00:28:19.448 [2024-11-19 10:55:26.805349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.448 [2024-11-19 10:55:26.805381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.448 qpair failed and we were unable to recover it. 00:28:19.448 [2024-11-19 10:55:26.805587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.448 [2024-11-19 10:55:26.805618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.448 qpair failed and we were unable to recover it. 00:28:19.448 [2024-11-19 10:55:26.805849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.448 [2024-11-19 10:55:26.805881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.448 qpair failed and we were unable to recover it. 00:28:19.448 [2024-11-19 10:55:26.806126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.448 [2024-11-19 10:55:26.806158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.448 qpair failed and we were unable to recover it. 00:28:19.448 [2024-11-19 10:55:26.806369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.448 [2024-11-19 10:55:26.806400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.448 qpair failed and we were unable to recover it. 00:28:19.448 [2024-11-19 10:55:26.806591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.448 [2024-11-19 10:55:26.806622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.448 qpair failed and we were unable to recover it. 00:28:19.448 [2024-11-19 10:55:26.806812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.448 [2024-11-19 10:55:26.806843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.448 qpair failed and we were unable to recover it. 
00:28:19.448 [2024-11-19 10:55:26.807018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.448 [2024-11-19 10:55:26.807050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.448 qpair failed and we were unable to recover it. 00:28:19.448 [2024-11-19 10:55:26.807238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.448 [2024-11-19 10:55:26.807269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.448 qpair failed and we were unable to recover it. 00:28:19.448 [2024-11-19 10:55:26.807513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.448 [2024-11-19 10:55:26.807545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.448 qpair failed and we were unable to recover it. 00:28:19.448 [2024-11-19 10:55:26.807730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.448 [2024-11-19 10:55:26.807761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.448 qpair failed and we were unable to recover it. 00:28:19.448 [2024-11-19 10:55:26.807892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.448 [2024-11-19 10:55:26.807923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.448 qpair failed and we were unable to recover it. 00:28:19.448 [2024-11-19 10:55:26.808173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.448 [2024-11-19 10:55:26.808205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.448 qpair failed and we were unable to recover it. 00:28:19.448 [2024-11-19 10:55:26.808388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.448 [2024-11-19 10:55:26.808420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.448 qpair failed and we were unable to recover it. 00:28:19.448 [2024-11-19 10:55:26.808604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.448 [2024-11-19 10:55:26.808635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.448 qpair failed and we were unable to recover it. 00:28:19.448 [2024-11-19 10:55:26.808817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.448 [2024-11-19 10:55:26.808849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.448 qpair failed and we were unable to recover it. 00:28:19.448 [2024-11-19 10:55:26.809033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.448 [2024-11-19 10:55:26.809065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.448 qpair failed and we were unable to recover it. 
00:28:19.448 [2024-11-19 10:55:26.809247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.448 [2024-11-19 10:55:26.809279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.448 qpair failed and we were unable to recover it. 00:28:19.448 [2024-11-19 10:55:26.809405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.448 [2024-11-19 10:55:26.809435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.448 qpair failed and we were unable to recover it. 00:28:19.448 [2024-11-19 10:55:26.809613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.448 [2024-11-19 10:55:26.809645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.448 qpair failed and we were unable to recover it. 00:28:19.448 [2024-11-19 10:55:26.809842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.448 [2024-11-19 10:55:26.809873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.448 qpair failed and we were unable to recover it. 00:28:19.448 [2024-11-19 10:55:26.810045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.448 [2024-11-19 10:55:26.810077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.448 qpair failed and we were unable to recover it. 00:28:19.449 [2024-11-19 10:55:26.810199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.449 [2024-11-19 10:55:26.810229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.449 qpair failed and we were unable to recover it. 00:28:19.449 [2024-11-19 10:55:26.810475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.449 [2024-11-19 10:55:26.810506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.449 qpair failed and we were unable to recover it. 00:28:19.449 [2024-11-19 10:55:26.810626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.449 [2024-11-19 10:55:26.810658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.449 qpair failed and we were unable to recover it. 00:28:19.766 [2024-11-19 10:55:26.810858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.766 [2024-11-19 10:55:26.810890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.766 qpair failed and we were unable to recover it. 00:28:19.766 [2024-11-19 10:55:26.811027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.766 [2024-11-19 10:55:26.811058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.766 qpair failed and we were unable to recover it. 
00:28:19.766 [2024-11-19 10:55:26.811250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.766 [2024-11-19 10:55:26.811281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.766 qpair failed and we were unable to recover it. 00:28:19.766 [2024-11-19 10:55:26.811469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.766 [2024-11-19 10:55:26.811505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.766 qpair failed and we were unable to recover it. 00:28:19.766 [2024-11-19 10:55:26.811715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.766 [2024-11-19 10:55:26.811748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.766 qpair failed and we were unable to recover it. 00:28:19.766 [2024-11-19 10:55:26.811938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.766 [2024-11-19 10:55:26.811982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.766 qpair failed and we were unable to recover it. 00:28:19.766 [2024-11-19 10:55:26.812092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.766 [2024-11-19 10:55:26.812123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.766 qpair failed and we were unable to recover it. 00:28:19.766 [2024-11-19 10:55:26.812311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.766 [2024-11-19 10:55:26.812343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.766 qpair failed and we were unable to recover it. 00:28:19.766 [2024-11-19 10:55:26.812463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.766 [2024-11-19 10:55:26.812493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.766 qpair failed and we were unable to recover it. 00:28:19.766 [2024-11-19 10:55:26.812601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.766 [2024-11-19 10:55:26.812632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.766 qpair failed and we were unable to recover it. 00:28:19.766 [2024-11-19 10:55:26.812831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.766 [2024-11-19 10:55:26.812868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.766 qpair failed and we were unable to recover it. 00:28:19.766 [2024-11-19 10:55:26.813046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.766 [2024-11-19 10:55:26.813079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.766 qpair failed and we were unable to recover it. 
00:28:19.766 [2024-11-19 10:55:26.813250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.766 [2024-11-19 10:55:26.813283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.766 qpair failed and we were unable to recover it. 00:28:19.766 [2024-11-19 10:55:26.813419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.766 [2024-11-19 10:55:26.813450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.766 qpair failed and we were unable to recover it. 00:28:19.766 [2024-11-19 10:55:26.813687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.766 [2024-11-19 10:55:26.813718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.766 qpair failed and we were unable to recover it. 00:28:19.766 [2024-11-19 10:55:26.813895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.766 [2024-11-19 10:55:26.813926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.766 qpair failed and we were unable to recover it. 00:28:19.766 [2024-11-19 10:55:26.814109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.766 [2024-11-19 10:55:26.814142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.766 qpair failed and we were unable to recover it. 00:28:19.766 [2024-11-19 10:55:26.814339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.766 [2024-11-19 10:55:26.814370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.766 qpair failed and we were unable to recover it. 00:28:19.766 [2024-11-19 10:55:26.814503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.766 [2024-11-19 10:55:26.814534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.766 qpair failed and we were unable to recover it. 00:28:19.766 [2024-11-19 10:55:26.814641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.766 [2024-11-19 10:55:26.814673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.766 qpair failed and we were unable to recover it. 00:28:19.766 [2024-11-19 10:55:26.814786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.766 [2024-11-19 10:55:26.814818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.766 qpair failed and we were unable to recover it. 00:28:19.766 [2024-11-19 10:55:26.814940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.766 [2024-11-19 10:55:26.814997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.766 qpair failed and we were unable to recover it. 
00:28:19.766 [2024-11-19 10:55:26.815112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.767 [2024-11-19 10:55:26.815143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.767 qpair failed and we were unable to recover it. 00:28:19.767 [2024-11-19 10:55:26.815341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.767 [2024-11-19 10:55:26.815373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.767 qpair failed and we were unable to recover it. 00:28:19.767 [2024-11-19 10:55:26.815501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.767 [2024-11-19 10:55:26.815533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.767 qpair failed and we were unable to recover it. 00:28:19.767 [2024-11-19 10:55:26.815723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.767 [2024-11-19 10:55:26.815755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.767 qpair failed and we were unable to recover it. 00:28:19.767 [2024-11-19 10:55:26.815884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.767 [2024-11-19 10:55:26.815916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.767 qpair failed and we were unable to recover it. 00:28:19.767 [2024-11-19 10:55:26.816050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.767 [2024-11-19 10:55:26.816082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.767 qpair failed and we were unable to recover it. 00:28:19.767 [2024-11-19 10:55:26.816267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.767 [2024-11-19 10:55:26.816299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.767 qpair failed and we were unable to recover it. 00:28:19.767 [2024-11-19 10:55:26.816508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.767 [2024-11-19 10:55:26.816540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.767 qpair failed and we were unable to recover it. 00:28:19.767 [2024-11-19 10:55:26.816658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.767 [2024-11-19 10:55:26.816689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.767 qpair failed and we were unable to recover it. 00:28:19.767 [2024-11-19 10:55:26.816928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.767 [2024-11-19 10:55:26.816969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.767 qpair failed and we were unable to recover it. 
00:28:19.767 [2024-11-19 10:55:26.817185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.767 [2024-11-19 10:55:26.817217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.767 qpair failed and we were unable to recover it. 00:28:19.767 [2024-11-19 10:55:26.817397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.767 [2024-11-19 10:55:26.817428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.767 qpair failed and we were unable to recover it. 00:28:19.767 [2024-11-19 10:55:26.817602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.767 [2024-11-19 10:55:26.817633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.767 qpair failed and we were unable to recover it. 00:28:19.767 [2024-11-19 10:55:26.817847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.767 [2024-11-19 10:55:26.817878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.767 qpair failed and we were unable to recover it. 00:28:19.767 [2024-11-19 10:55:26.818000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.767 [2024-11-19 10:55:26.818032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.767 qpair failed and we were unable to recover it. 00:28:19.767 [2024-11-19 10:55:26.818218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.767 [2024-11-19 10:55:26.818250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.767 qpair failed and we were unable to recover it. 00:28:19.767 [2024-11-19 10:55:26.818443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.767 [2024-11-19 10:55:26.818475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.767 qpair failed and we were unable to recover it. 00:28:19.767 [2024-11-19 10:55:26.818595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.767 [2024-11-19 10:55:26.818626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.767 qpair failed and we were unable to recover it. 00:28:19.767 [2024-11-19 10:55:26.818827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.767 [2024-11-19 10:55:26.818858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.767 qpair failed and we were unable to recover it. 00:28:19.767 [2024-11-19 10:55:26.819099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.767 [2024-11-19 10:55:26.819131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.767 qpair failed and we were unable to recover it. 
00:28:19.767 [2024-11-19 10:55:26.819246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.767 [2024-11-19 10:55:26.819277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.767 qpair failed and we were unable to recover it. 00:28:19.767 [2024-11-19 10:55:26.819535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.767 [2024-11-19 10:55:26.819567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.767 qpair failed and we were unable to recover it. 00:28:19.767 [2024-11-19 10:55:26.819746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.767 [2024-11-19 10:55:26.819777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.767 qpair failed and we were unable to recover it. 00:28:19.767 [2024-11-19 10:55:26.819971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.767 [2024-11-19 10:55:26.820003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.767 qpair failed and we were unable to recover it. 00:28:19.767 [2024-11-19 10:55:26.820117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.767 [2024-11-19 10:55:26.820147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.767 qpair failed and we were unable to recover it. 00:28:19.767 [2024-11-19 10:55:26.820330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.767 [2024-11-19 10:55:26.820360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.767 qpair failed and we were unable to recover it. 00:28:19.767 [2024-11-19 10:55:26.820544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.767 [2024-11-19 10:55:26.820576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.767 qpair failed and we were unable to recover it. 00:28:19.767 [2024-11-19 10:55:26.820690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.767 [2024-11-19 10:55:26.820722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.767 qpair failed and we were unable to recover it. 00:28:19.767 [2024-11-19 10:55:26.820845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.767 [2024-11-19 10:55:26.820881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.767 qpair failed and we were unable to recover it. 00:28:19.767 [2024-11-19 10:55:26.821118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.767 [2024-11-19 10:55:26.821150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.767 qpair failed and we were unable to recover it. 
00:28:19.767 [2024-11-19 10:55:26.821253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.767 [2024-11-19 10:55:26.821283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.767 qpair failed and we were unable to recover it. 00:28:19.767 [2024-11-19 10:55:26.821424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.767 [2024-11-19 10:55:26.821456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.767 qpair failed and we were unable to recover it. 00:28:19.767 [2024-11-19 10:55:26.821623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.767 [2024-11-19 10:55:26.821654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.767 qpair failed and we were unable to recover it. 00:28:19.767 [2024-11-19 10:55:26.821825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.767 [2024-11-19 10:55:26.821857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.767 qpair failed and we were unable to recover it. 00:28:19.767 [2024-11-19 10:55:26.822030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.767 [2024-11-19 10:55:26.822064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.767 qpair failed and we were unable to recover it. 00:28:19.767 [2024-11-19 10:55:26.822241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.767 [2024-11-19 10:55:26.822272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.767 qpair failed and we were unable to recover it. 00:28:19.767 [2024-11-19 10:55:26.822464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.767 [2024-11-19 10:55:26.822496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.767 qpair failed and we were unable to recover it. 00:28:19.767 [2024-11-19 10:55:26.822678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.767 [2024-11-19 10:55:26.822709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.767 qpair failed and we were unable to recover it. 00:28:19.768 [2024-11-19 10:55:26.822880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.768 [2024-11-19 10:55:26.822911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.768 qpair failed and we were unable to recover it. 00:28:19.768 [2024-11-19 10:55:26.823132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.768 [2024-11-19 10:55:26.823165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.768 qpair failed and we were unable to recover it. 
00:28:19.768 [2024-11-19 10:55:26.823342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.768 [2024-11-19 10:55:26.823373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.768 qpair failed and we were unable to recover it. 00:28:19.768 [2024-11-19 10:55:26.823557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.768 [2024-11-19 10:55:26.823599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.768 qpair failed and we were unable to recover it. 00:28:19.768 [2024-11-19 10:55:26.823859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.768 [2024-11-19 10:55:26.823895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.768 qpair failed and we were unable to recover it. 00:28:19.768 [2024-11-19 10:55:26.824097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.768 [2024-11-19 10:55:26.824130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.768 qpair failed and we were unable to recover it. 00:28:19.768 [2024-11-19 10:55:26.824325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.768 [2024-11-19 10:55:26.824357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.768 qpair failed and we were unable to recover it. 00:28:19.768 [2024-11-19 10:55:26.824477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.768 [2024-11-19 10:55:26.824509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.768 qpair failed and we were unable to recover it. 00:28:19.768 [2024-11-19 10:55:26.824716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.768 [2024-11-19 10:55:26.824747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.768 qpair failed and we were unable to recover it. 00:28:19.768 [2024-11-19 10:55:26.824919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.768 [2024-11-19 10:55:26.824963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.768 qpair failed and we were unable to recover it. 00:28:19.768 [2024-11-19 10:55:26.825152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.768 [2024-11-19 10:55:26.825183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.768 qpair failed and we were unable to recover it. 00:28:19.768 [2024-11-19 10:55:26.825362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.768 [2024-11-19 10:55:26.825394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.768 qpair failed and we were unable to recover it. 
00:28:19.768 [2024-11-19 10:55:26.825571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.768 [2024-11-19 10:55:26.825602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.768 qpair failed and we were unable to recover it.
00:28:19.768 [2024-11-19 10:55:26.826653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.768 [2024-11-19 10:55:26.826723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.768 qpair failed and we were unable to recover it.
00:28:19.768 [2024-11-19 10:55:26.828105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.768 [2024-11-19 10:55:26.828176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.768 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats continuously for timestamps 10:55:26.825571 through 10:55:26.865959, cycling among tqpair=0x7f6de0000b90, tqpair=0x22c8ba0, and tqpair=0x7f6dd8000b90, always with addr=10.0.0.2, port=4420 and errno = 111 ...]
00:28:19.774 [2024-11-19 10:55:26.865921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.774 [2024-11-19 10:55:26.865959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.774 qpair failed and we were unable to recover it.
00:28:19.774 [2024-11-19 10:55:26.866155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.774 [2024-11-19 10:55:26.866185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.774 qpair failed and we were unable to recover it. 00:28:19.774 [2024-11-19 10:55:26.866287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.774 [2024-11-19 10:55:26.866318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.774 qpair failed and we were unable to recover it. 00:28:19.774 [2024-11-19 10:55:26.866488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.774 [2024-11-19 10:55:26.866520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.774 qpair failed and we were unable to recover it. 00:28:19.774 [2024-11-19 10:55:26.866648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.774 [2024-11-19 10:55:26.866678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.774 qpair failed and we were unable to recover it. 00:28:19.774 [2024-11-19 10:55:26.866796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.774 [2024-11-19 10:55:26.866826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.774 qpair failed and we were unable to recover it. 00:28:19.774 [2024-11-19 10:55:26.866928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.774 [2024-11-19 10:55:26.866967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.774 qpair failed and we were unable to recover it. 00:28:19.774 [2024-11-19 10:55:26.867088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.774 [2024-11-19 10:55:26.867118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.774 qpair failed and we were unable to recover it. 00:28:19.774 [2024-11-19 10:55:26.867232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.774 [2024-11-19 10:55:26.867263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.774 qpair failed and we were unable to recover it. 00:28:19.774 [2024-11-19 10:55:26.867382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.774 [2024-11-19 10:55:26.867412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.774 qpair failed and we were unable to recover it. 00:28:19.774 [2024-11-19 10:55:26.867519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.774 [2024-11-19 10:55:26.867556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.774 qpair failed and we were unable to recover it. 
00:28:19.774 [2024-11-19 10:55:26.867674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.774 [2024-11-19 10:55:26.867704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.774 qpair failed and we were unable to recover it. 00:28:19.774 [2024-11-19 10:55:26.867896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.774 [2024-11-19 10:55:26.867927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.774 qpair failed and we were unable to recover it. 00:28:19.774 [2024-11-19 10:55:26.868115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.774 [2024-11-19 10:55:26.868147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.774 qpair failed and we were unable to recover it. 00:28:19.774 [2024-11-19 10:55:26.868257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.774 [2024-11-19 10:55:26.868287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.774 qpair failed and we were unable to recover it. 00:28:19.774 [2024-11-19 10:55:26.868404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.774 [2024-11-19 10:55:26.868434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.774 qpair failed and we were unable to recover it. 00:28:19.774 [2024-11-19 10:55:26.868557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.774 [2024-11-19 10:55:26.868589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.774 qpair failed and we were unable to recover it. 00:28:19.774 [2024-11-19 10:55:26.868695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.774 [2024-11-19 10:55:26.868726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.774 qpair failed and we were unable to recover it. 00:28:19.774 [2024-11-19 10:55:26.868903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.774 [2024-11-19 10:55:26.868935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.774 qpair failed and we were unable to recover it. 00:28:19.774 [2024-11-19 10:55:26.869156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.774 [2024-11-19 10:55:26.869190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.774 qpair failed and we were unable to recover it. 00:28:19.774 [2024-11-19 10:55:26.869309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.774 [2024-11-19 10:55:26.869341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.774 qpair failed and we were unable to recover it. 
00:28:19.774 [2024-11-19 10:55:26.869458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.774 [2024-11-19 10:55:26.869489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.774 qpair failed and we were unable to recover it. 00:28:19.774 [2024-11-19 10:55:26.869635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.774 [2024-11-19 10:55:26.869667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.774 qpair failed and we were unable to recover it. 00:28:19.774 [2024-11-19 10:55:26.869788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.774 [2024-11-19 10:55:26.869819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.774 qpair failed and we were unable to recover it. 00:28:19.774 [2024-11-19 10:55:26.869944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.774 [2024-11-19 10:55:26.869987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.774 qpair failed and we were unable to recover it. 00:28:19.774 [2024-11-19 10:55:26.870112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.774 [2024-11-19 10:55:26.870144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.774 qpair failed and we were unable to recover it. 00:28:19.774 [2024-11-19 10:55:26.870250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.774 [2024-11-19 10:55:26.870281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.774 qpair failed and we were unable to recover it. 00:28:19.774 [2024-11-19 10:55:26.870544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.774 [2024-11-19 10:55:26.870577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.774 qpair failed and we were unable to recover it. 00:28:19.774 [2024-11-19 10:55:26.870687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.774 [2024-11-19 10:55:26.870719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.774 qpair failed and we were unable to recover it. 00:28:19.774 [2024-11-19 10:55:26.870836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.774 [2024-11-19 10:55:26.870866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.774 qpair failed and we were unable to recover it. 00:28:19.774 [2024-11-19 10:55:26.871001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.774 [2024-11-19 10:55:26.871034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.774 qpair failed and we were unable to recover it. 
00:28:19.774 [2024-11-19 10:55:26.871212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.774 [2024-11-19 10:55:26.871244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.774 qpair failed and we were unable to recover it. 00:28:19.774 [2024-11-19 10:55:26.871414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.774 [2024-11-19 10:55:26.871445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.774 qpair failed and we were unable to recover it. 00:28:19.774 [2024-11-19 10:55:26.871564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.774 [2024-11-19 10:55:26.871594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.774 qpair failed and we were unable to recover it. 00:28:19.774 [2024-11-19 10:55:26.871709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.774 [2024-11-19 10:55:26.871740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.774 qpair failed and we were unable to recover it. 00:28:19.774 [2024-11-19 10:55:26.871862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.774 [2024-11-19 10:55:26.871892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.774 qpair failed and we were unable to recover it. 00:28:19.774 [2024-11-19 10:55:26.872015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.774 [2024-11-19 10:55:26.872046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.774 qpair failed and we were unable to recover it. 00:28:19.775 [2024-11-19 10:55:26.872183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.775 [2024-11-19 10:55:26.872215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.775 qpair failed and we were unable to recover it. 00:28:19.775 [2024-11-19 10:55:26.872390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.775 [2024-11-19 10:55:26.872420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.775 qpair failed and we were unable to recover it. 00:28:19.775 [2024-11-19 10:55:26.872542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.775 [2024-11-19 10:55:26.872573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.775 qpair failed and we were unable to recover it. 00:28:19.775 [2024-11-19 10:55:26.872680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.775 [2024-11-19 10:55:26.872711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.775 qpair failed and we were unable to recover it. 
00:28:19.775 [2024-11-19 10:55:26.872838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.775 [2024-11-19 10:55:26.872869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.775 qpair failed and we were unable to recover it. 00:28:19.775 [2024-11-19 10:55:26.872987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.775 [2024-11-19 10:55:26.873020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.775 qpair failed and we were unable to recover it. 00:28:19.775 [2024-11-19 10:55:26.873130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.775 [2024-11-19 10:55:26.873163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.775 qpair failed and we were unable to recover it. 00:28:19.775 [2024-11-19 10:55:26.873334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.775 [2024-11-19 10:55:26.873363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.775 qpair failed and we were unable to recover it. 00:28:19.775 [2024-11-19 10:55:26.873474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.775 [2024-11-19 10:55:26.873503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.775 qpair failed and we were unable to recover it. 00:28:19.775 [2024-11-19 10:55:26.873700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.775 [2024-11-19 10:55:26.873729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.775 qpair failed and we were unable to recover it. 00:28:19.775 [2024-11-19 10:55:26.873828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.775 [2024-11-19 10:55:26.873857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.775 qpair failed and we were unable to recover it. 00:28:19.775 [2024-11-19 10:55:26.873975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.775 [2024-11-19 10:55:26.874005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.775 qpair failed and we were unable to recover it. 00:28:19.775 [2024-11-19 10:55:26.874188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.775 [2024-11-19 10:55:26.874217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.775 qpair failed and we were unable to recover it. 00:28:19.775 [2024-11-19 10:55:26.874345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.775 [2024-11-19 10:55:26.874382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.775 qpair failed and we were unable to recover it. 
00:28:19.775 [2024-11-19 10:55:26.874572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.775 [2024-11-19 10:55:26.874602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.775 qpair failed and we were unable to recover it. 00:28:19.775 [2024-11-19 10:55:26.874721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.775 [2024-11-19 10:55:26.874752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.775 qpair failed and we were unable to recover it. 00:28:19.775 [2024-11-19 10:55:26.874866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.775 [2024-11-19 10:55:26.874898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.775 qpair failed and we were unable to recover it. 00:28:19.775 [2024-11-19 10:55:26.875043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.775 [2024-11-19 10:55:26.875073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.775 qpair failed and we were unable to recover it. 00:28:19.775 [2024-11-19 10:55:26.875195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.775 [2024-11-19 10:55:26.875224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.775 qpair failed and we were unable to recover it. 00:28:19.775 [2024-11-19 10:55:26.875348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.775 [2024-11-19 10:55:26.875377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.775 qpair failed and we were unable to recover it. 00:28:19.775 [2024-11-19 10:55:26.875488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.775 [2024-11-19 10:55:26.875517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.775 qpair failed and we were unable to recover it. 00:28:19.775 [2024-11-19 10:55:26.875636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.775 [2024-11-19 10:55:26.875665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.775 qpair failed and we were unable to recover it. 00:28:19.775 [2024-11-19 10:55:26.875768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.775 [2024-11-19 10:55:26.875796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.775 qpair failed and we were unable to recover it. 00:28:19.775 [2024-11-19 10:55:26.875898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.775 [2024-11-19 10:55:26.875928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.775 qpair failed and we were unable to recover it. 
00:28:19.775 [2024-11-19 10:55:26.876123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.775 [2024-11-19 10:55:26.876155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.775 qpair failed and we were unable to recover it. 00:28:19.775 [2024-11-19 10:55:26.876282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.775 [2024-11-19 10:55:26.876315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.775 qpair failed and we were unable to recover it. 00:28:19.775 [2024-11-19 10:55:26.876435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.775 [2024-11-19 10:55:26.876467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.775 qpair failed and we were unable to recover it. 00:28:19.775 [2024-11-19 10:55:26.876590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.775 [2024-11-19 10:55:26.876622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.775 qpair failed and we were unable to recover it. 00:28:19.775 [2024-11-19 10:55:26.876800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.775 [2024-11-19 10:55:26.876833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.775 qpair failed and we were unable to recover it. 00:28:19.775 [2024-11-19 10:55:26.877003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.775 [2024-11-19 10:55:26.877034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.775 qpair failed and we were unable to recover it. 00:28:19.775 [2024-11-19 10:55:26.877137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.775 [2024-11-19 10:55:26.877179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.775 qpair failed and we were unable to recover it. 00:28:19.775 [2024-11-19 10:55:26.877288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.775 [2024-11-19 10:55:26.877318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.775 qpair failed and we were unable to recover it. 00:28:19.775 [2024-11-19 10:55:26.877421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.775 [2024-11-19 10:55:26.877452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.775 qpair failed and we were unable to recover it. 00:28:19.775 [2024-11-19 10:55:26.877561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.776 [2024-11-19 10:55:26.877593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.776 qpair failed and we were unable to recover it. 
00:28:19.776 [2024-11-19 10:55:26.877706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.776 [2024-11-19 10:55:26.877737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.776 qpair failed and we were unable to recover it. 00:28:19.776 [2024-11-19 10:55:26.877840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.776 [2024-11-19 10:55:26.877871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.776 qpair failed and we were unable to recover it. 00:28:19.776 [2024-11-19 10:55:26.877982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.776 [2024-11-19 10:55:26.878014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.776 qpair failed and we were unable to recover it. 00:28:19.776 [2024-11-19 10:55:26.878142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.776 [2024-11-19 10:55:26.878171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.776 qpair failed and we were unable to recover it. 00:28:19.776 [2024-11-19 10:55:26.878281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.776 [2024-11-19 10:55:26.878310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.776 qpair failed and we were unable to recover it. 00:28:19.776 [2024-11-19 10:55:26.878520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.776 [2024-11-19 10:55:26.878548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.776 qpair failed and we were unable to recover it. 00:28:19.776 [2024-11-19 10:55:26.878825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.776 [2024-11-19 10:55:26.878854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.776 qpair failed and we were unable to recover it. 00:28:19.776 [2024-11-19 10:55:26.878969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.776 [2024-11-19 10:55:26.878999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.776 qpair failed and we were unable to recover it. 00:28:19.776 [2024-11-19 10:55:26.879102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.776 [2024-11-19 10:55:26.879131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.776 qpair failed and we were unable to recover it. 00:28:19.776 [2024-11-19 10:55:26.879240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.776 [2024-11-19 10:55:26.879268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.776 qpair failed and we were unable to recover it. 
00:28:19.776 [2024-11-19 10:55:26.879436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.776 [2024-11-19 10:55:26.879465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.776 qpair failed and we were unable to recover it. 00:28:19.776 [2024-11-19 10:55:26.879598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.776 [2024-11-19 10:55:26.879630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.776 qpair failed and we were unable to recover it. 00:28:19.776 [2024-11-19 10:55:26.879742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.776 [2024-11-19 10:55:26.879773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.776 qpair failed and we were unable to recover it. 00:28:19.776 [2024-11-19 10:55:26.879962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.776 [2024-11-19 10:55:26.879995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.776 qpair failed and we were unable to recover it. 00:28:19.776 [2024-11-19 10:55:26.880120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.776 [2024-11-19 10:55:26.880165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.776 qpair failed and we were unable to recover it. 00:28:19.776 [2024-11-19 10:55:26.880300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.776 [2024-11-19 10:55:26.880329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.776 qpair failed and we were unable to recover it. 00:28:19.776 [2024-11-19 10:55:26.880497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.776 [2024-11-19 10:55:26.880526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.776 qpair failed and we were unable to recover it. 00:28:19.776 [2024-11-19 10:55:26.880634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.776 [2024-11-19 10:55:26.880662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.776 qpair failed and we were unable to recover it. 00:28:19.776 [2024-11-19 10:55:26.880761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.776 [2024-11-19 10:55:26.880804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.776 qpair failed and we were unable to recover it. 00:28:19.776 [2024-11-19 10:55:26.880977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.776 [2024-11-19 10:55:26.881017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.776 qpair failed and we were unable to recover it. 
00:28:19.776 [2024-11-19 10:55:26.881138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.776 [2024-11-19 10:55:26.881169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.776 qpair failed and we were unable to recover it. 00:28:19.776 [2024-11-19 10:55:26.881276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.776 [2024-11-19 10:55:26.881307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.776 qpair failed and we were unable to recover it. 00:28:19.776 [2024-11-19 10:55:26.881433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.776 [2024-11-19 10:55:26.881461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.776 qpair failed and we were unable to recover it. 00:28:19.776 [2024-11-19 10:55:26.881572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.776 [2024-11-19 10:55:26.881600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.776 qpair failed and we were unable to recover it. 00:28:19.776 [2024-11-19 10:55:26.881773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.776 [2024-11-19 10:55:26.881801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.776 qpair failed and we were unable to recover it. 00:28:19.776 [2024-11-19 10:55:26.881897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.776 [2024-11-19 10:55:26.881925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.776 qpair failed and we were unable to recover it. 00:28:19.776 [2024-11-19 10:55:26.882037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.776 [2024-11-19 10:55:26.882066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.776 qpair failed and we were unable to recover it. 00:28:19.776 [2024-11-19 10:55:26.882160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.776 [2024-11-19 10:55:26.882189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.776 qpair failed and we were unable to recover it. 00:28:19.776 [2024-11-19 10:55:26.882322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.776 [2024-11-19 10:55:26.882351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.776 qpair failed and we were unable to recover it. 00:28:19.776 [2024-11-19 10:55:26.882461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.776 [2024-11-19 10:55:26.882490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.776 qpair failed and we were unable to recover it. 
00:28:19.776 [2024-11-19 10:55:26.882721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.776 [2024-11-19 10:55:26.882750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.776 qpair failed and we were unable to recover it. 00:28:19.776 [2024-11-19 10:55:26.882987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.776 [2024-11-19 10:55:26.883017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.776 qpair failed and we were unable to recover it. 00:28:19.776 [2024-11-19 10:55:26.883194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.776 [2024-11-19 10:55:26.883223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.776 qpair failed and we were unable to recover it. 00:28:19.776 [2024-11-19 10:55:26.883339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.776 [2024-11-19 10:55:26.883366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.776 qpair failed and we were unable to recover it. 00:28:19.776 [2024-11-19 10:55:26.883474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.776 [2024-11-19 10:55:26.883501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.776 qpair failed and we were unable to recover it. 00:28:19.776 [2024-11-19 10:55:26.883620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.776 [2024-11-19 10:55:26.883646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.776 qpair failed and we were unable to recover it. 00:28:19.776 [2024-11-19 10:55:26.883741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.776 [2024-11-19 10:55:26.883768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.776 qpair failed and we were unable to recover it. 00:28:19.777 [2024-11-19 10:55:26.883937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.777 [2024-11-19 10:55:26.883974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.777 qpair failed and we were unable to recover it. 00:28:19.777 [2024-11-19 10:55:26.884088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.777 [2024-11-19 10:55:26.884114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.777 qpair failed and we were unable to recover it. 00:28:19.777 [2024-11-19 10:55:26.884291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.777 [2024-11-19 10:55:26.884323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.777 qpair failed and we were unable to recover it. 
00:28:19.777 [2024-11-19 10:55:26.884426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.777 [2024-11-19 10:55:26.884458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.777 qpair failed and we were unable to recover it. 00:28:19.777 [2024-11-19 10:55:26.884632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.777 [2024-11-19 10:55:26.884664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.777 qpair failed and we were unable to recover it. 00:28:19.777 [2024-11-19 10:55:26.884770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.777 [2024-11-19 10:55:26.884802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.777 qpair failed and we were unable to recover it. 00:28:19.777 [2024-11-19 10:55:26.884923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.777 [2024-11-19 10:55:26.884995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.777 qpair failed and we were unable to recover it. 00:28:19.777 [2024-11-19 10:55:26.885116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.777 [2024-11-19 10:55:26.885143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.777 qpair failed and we were unable to recover it. 00:28:19.777 [2024-11-19 10:55:26.885248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.777 [2024-11-19 10:55:26.885274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.777 qpair failed and we were unable to recover it. 00:28:19.777 [2024-11-19 10:55:26.885528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.777 [2024-11-19 10:55:26.885554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.777 qpair failed and we were unable to recover it. 00:28:19.777 [2024-11-19 10:55:26.885716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.777 [2024-11-19 10:55:26.885742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.777 qpair failed and we were unable to recover it. 00:28:19.777 [2024-11-19 10:55:26.885848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.777 [2024-11-19 10:55:26.885874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.777 qpair failed and we were unable to recover it. 00:28:19.777 [2024-11-19 10:55:26.885982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.777 [2024-11-19 10:55:26.886010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.777 qpair failed and we were unable to recover it. 
00:28:19.777 [2024-11-19 10:55:26.886114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.777 [2024-11-19 10:55:26.886141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.777 qpair failed and we were unable to recover it. 00:28:19.777 [2024-11-19 10:55:26.886248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.777 [2024-11-19 10:55:26.886274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.777 qpair failed and we were unable to recover it. 00:28:19.777 [2024-11-19 10:55:26.886449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.777 [2024-11-19 10:55:26.886476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.777 qpair failed and we were unable to recover it. 00:28:19.777 [2024-11-19 10:55:26.886593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.777 [2024-11-19 10:55:26.886624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.777 qpair failed and we were unable to recover it. 00:28:19.777 [2024-11-19 10:55:26.886733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.777 [2024-11-19 10:55:26.886764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.777 qpair failed and we were unable to recover it. 00:28:19.777 [2024-11-19 10:55:26.886879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.777 [2024-11-19 10:55:26.886911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.777 qpair failed and we were unable to recover it. 00:28:19.777 [2024-11-19 10:55:26.887057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.777 [2024-11-19 10:55:26.887091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.777 qpair failed and we were unable to recover it. 00:28:19.777 [2024-11-19 10:55:26.887218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.777 [2024-11-19 10:55:26.887245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.777 qpair failed and we were unable to recover it. 00:28:19.777 [2024-11-19 10:55:26.887349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.777 [2024-11-19 10:55:26.887376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.777 qpair failed and we were unable to recover it. 00:28:19.777 [2024-11-19 10:55:26.887479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.777 [2024-11-19 10:55:26.887511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.777 qpair failed and we were unable to recover it. 
00:28:19.777 [2024-11-19 10:55:26.887739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.777 [2024-11-19 10:55:26.887765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.777 qpair failed and we were unable to recover it.
[... the same three-line record (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats back-to-back throughout this stretch: for tqpair=0x7f6dd8000b90 through 10:55:26.899572, then for tqpair=0x7f6dd4000b90 from 10:55:26.899834 onward ...]
00:28:19.783 [2024-11-19 10:55:26.928023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.783 [2024-11-19 10:55:26.928055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:19.783 qpair failed and we were unable to recover it.
00:28:19.783 [2024-11-19 10:55:26.928230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.783 [2024-11-19 10:55:26.928262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.783 qpair failed and we were unable to recover it. 00:28:19.783 [2024-11-19 10:55:26.928391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.783 [2024-11-19 10:55:26.928424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.783 qpair failed and we were unable to recover it. 00:28:19.783 [2024-11-19 10:55:26.928544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.783 [2024-11-19 10:55:26.928575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.783 qpair failed and we were unable to recover it. 00:28:19.783 [2024-11-19 10:55:26.928771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.783 [2024-11-19 10:55:26.928804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.783 qpair failed and we were unable to recover it. 00:28:19.783 [2024-11-19 10:55:26.928915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.783 [2024-11-19 10:55:26.928946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.783 qpair failed and we were unable to recover it. 00:28:19.783 [2024-11-19 10:55:26.929139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.783 [2024-11-19 10:55:26.929170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.783 qpair failed and we were unable to recover it. 00:28:19.783 [2024-11-19 10:55:26.929344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.783 [2024-11-19 10:55:26.929376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.783 qpair failed and we were unable to recover it. 00:28:19.783 [2024-11-19 10:55:26.929495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.783 [2024-11-19 10:55:26.929528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.783 qpair failed and we were unable to recover it. 00:28:19.783 [2024-11-19 10:55:26.929639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.783 [2024-11-19 10:55:26.929668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.783 qpair failed and we were unable to recover it. 00:28:19.783 [2024-11-19 10:55:26.929789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.783 [2024-11-19 10:55:26.929821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.783 qpair failed and we were unable to recover it. 
00:28:19.783 [2024-11-19 10:55:26.930065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.783 [2024-11-19 10:55:26.930098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.783 qpair failed and we were unable to recover it. 00:28:19.783 [2024-11-19 10:55:26.930207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.783 [2024-11-19 10:55:26.930239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.783 qpair failed and we were unable to recover it. 00:28:19.783 [2024-11-19 10:55:26.930412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.783 [2024-11-19 10:55:26.930444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.783 qpair failed and we were unable to recover it. 00:28:19.783 [2024-11-19 10:55:26.930623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.783 [2024-11-19 10:55:26.930654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.783 qpair failed and we were unable to recover it. 00:28:19.783 [2024-11-19 10:55:26.930849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.783 [2024-11-19 10:55:26.930881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.783 qpair failed and we were unable to recover it. 00:28:19.783 [2024-11-19 10:55:26.931015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.783 [2024-11-19 10:55:26.931048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.783 qpair failed and we were unable to recover it. 00:28:19.783 [2024-11-19 10:55:26.931213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.783 [2024-11-19 10:55:26.931284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.783 qpair failed and we were unable to recover it. 00:28:19.783 [2024-11-19 10:55:26.931423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.783 [2024-11-19 10:55:26.931459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.783 qpair failed and we were unable to recover it. 00:28:19.783 [2024-11-19 10:55:26.931639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.783 [2024-11-19 10:55:26.931673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.783 qpair failed and we were unable to recover it. 00:28:19.783 [2024-11-19 10:55:26.931799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.783 [2024-11-19 10:55:26.931832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.783 qpair failed and we were unable to recover it. 
00:28:19.783 [2024-11-19 10:55:26.932087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.783 [2024-11-19 10:55:26.932121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.783 qpair failed and we were unable to recover it. 00:28:19.783 [2024-11-19 10:55:26.932251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.783 [2024-11-19 10:55:26.932284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.783 qpair failed and we were unable to recover it. 00:28:19.783 [2024-11-19 10:55:26.932403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.783 [2024-11-19 10:55:26.932434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.783 qpair failed and we were unable to recover it. 00:28:19.783 [2024-11-19 10:55:26.932558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.783 [2024-11-19 10:55:26.932590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.783 qpair failed and we were unable to recover it. 00:28:19.783 [2024-11-19 10:55:26.932762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.783 [2024-11-19 10:55:26.932793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.783 qpair failed and we were unable to recover it. 00:28:19.783 [2024-11-19 10:55:26.932976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.783 [2024-11-19 10:55:26.933010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.783 qpair failed and we were unable to recover it. 00:28:19.783 [2024-11-19 10:55:26.933119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.783 [2024-11-19 10:55:26.933149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.783 qpair failed and we were unable to recover it. 00:28:19.784 [2024-11-19 10:55:26.933277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.784 [2024-11-19 10:55:26.933308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.784 qpair failed and we were unable to recover it. 00:28:19.784 [2024-11-19 10:55:26.933486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.784 [2024-11-19 10:55:26.933518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.784 qpair failed and we were unable to recover it. 00:28:19.784 [2024-11-19 10:55:26.933633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.784 [2024-11-19 10:55:26.933679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.784 qpair failed and we were unable to recover it. 
00:28:19.784 [2024-11-19 10:55:26.933786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.784 [2024-11-19 10:55:26.933817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.784 qpair failed and we were unable to recover it. 00:28:19.784 [2024-11-19 10:55:26.933941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.784 [2024-11-19 10:55:26.933985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.784 qpair failed and we were unable to recover it. 00:28:19.784 [2024-11-19 10:55:26.934177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.784 [2024-11-19 10:55:26.934208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.784 qpair failed and we were unable to recover it. 00:28:19.784 [2024-11-19 10:55:26.934313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.784 [2024-11-19 10:55:26.934344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.784 qpair failed and we were unable to recover it. 00:28:19.784 [2024-11-19 10:55:26.934465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.784 [2024-11-19 10:55:26.934497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.784 qpair failed and we were unable to recover it. 00:28:19.784 [2024-11-19 10:55:26.934618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.784 [2024-11-19 10:55:26.934648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.784 qpair failed and we were unable to recover it. 00:28:19.784 [2024-11-19 10:55:26.934761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.784 [2024-11-19 10:55:26.934792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.784 qpair failed and we were unable to recover it. 00:28:19.784 [2024-11-19 10:55:26.934991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.784 [2024-11-19 10:55:26.935025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.784 qpair failed and we were unable to recover it. 00:28:19.784 [2024-11-19 10:55:26.935212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.784 [2024-11-19 10:55:26.935243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.784 qpair failed and we were unable to recover it. 00:28:19.784 [2024-11-19 10:55:26.935349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.784 [2024-11-19 10:55:26.935381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.784 qpair failed and we were unable to recover it. 
00:28:19.784 [2024-11-19 10:55:26.935493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.784 [2024-11-19 10:55:26.935523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.784 qpair failed and we were unable to recover it. 00:28:19.784 [2024-11-19 10:55:26.935643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.784 [2024-11-19 10:55:26.935674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.784 qpair failed and we were unable to recover it. 00:28:19.784 [2024-11-19 10:55:26.935789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.784 [2024-11-19 10:55:26.935819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.784 qpair failed and we were unable to recover it. 00:28:19.784 [2024-11-19 10:55:26.935960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.784 [2024-11-19 10:55:26.935994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.784 qpair failed and we were unable to recover it. 00:28:19.784 [2024-11-19 10:55:26.936188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.784 [2024-11-19 10:55:26.936219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.784 qpair failed and we were unable to recover it. 00:28:19.784 [2024-11-19 10:55:26.936408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.784 [2024-11-19 10:55:26.936439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.784 qpair failed and we were unable to recover it. 00:28:19.784 [2024-11-19 10:55:26.936697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.784 [2024-11-19 10:55:26.936727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.784 qpair failed and we were unable to recover it. 00:28:19.784 [2024-11-19 10:55:26.936909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.784 [2024-11-19 10:55:26.936940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.784 qpair failed and we were unable to recover it. 00:28:19.784 [2024-11-19 10:55:26.937092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.784 [2024-11-19 10:55:26.937123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.784 qpair failed and we were unable to recover it. 00:28:19.784 [2024-11-19 10:55:26.937253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.784 [2024-11-19 10:55:26.937283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.784 qpair failed and we were unable to recover it. 
00:28:19.784 [2024-11-19 10:55:26.937549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.784 [2024-11-19 10:55:26.937580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.784 qpair failed and we were unable to recover it. 00:28:19.784 [2024-11-19 10:55:26.937695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.784 [2024-11-19 10:55:26.937726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.784 qpair failed and we were unable to recover it. 00:28:19.784 [2024-11-19 10:55:26.937916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.784 [2024-11-19 10:55:26.937946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.784 qpair failed and we were unable to recover it. 00:28:19.784 [2024-11-19 10:55:26.938150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.784 [2024-11-19 10:55:26.938182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.784 qpair failed and we were unable to recover it. 00:28:19.784 [2024-11-19 10:55:26.938299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.784 [2024-11-19 10:55:26.938329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.784 qpair failed and we were unable to recover it. 00:28:19.784 [2024-11-19 10:55:26.938519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.784 [2024-11-19 10:55:26.938550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.784 qpair failed and we were unable to recover it. 00:28:19.784 [2024-11-19 10:55:26.938682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.784 [2024-11-19 10:55:26.938714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.784 qpair failed and we were unable to recover it. 00:28:19.784 [2024-11-19 10:55:26.938886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.784 [2024-11-19 10:55:26.938917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.784 qpair failed and we were unable to recover it. 00:28:19.784 [2024-11-19 10:55:26.939168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.784 [2024-11-19 10:55:26.939200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.784 qpair failed and we were unable to recover it. 00:28:19.784 [2024-11-19 10:55:26.939318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.784 [2024-11-19 10:55:26.939349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.784 qpair failed and we were unable to recover it. 
00:28:19.784 [2024-11-19 10:55:26.939448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.784 [2024-11-19 10:55:26.939479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.785 qpair failed and we were unable to recover it. 00:28:19.785 [2024-11-19 10:55:26.939585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.785 [2024-11-19 10:55:26.939615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.785 qpair failed and we were unable to recover it. 00:28:19.785 [2024-11-19 10:55:26.939796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.785 [2024-11-19 10:55:26.939826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.785 qpair failed and we were unable to recover it. 00:28:19.785 [2024-11-19 10:55:26.939960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.785 [2024-11-19 10:55:26.939993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.785 qpair failed and we were unable to recover it. 00:28:19.785 [2024-11-19 10:55:26.940202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.785 [2024-11-19 10:55:26.940234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.785 qpair failed and we were unable to recover it. 00:28:19.785 [2024-11-19 10:55:26.940347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.785 [2024-11-19 10:55:26.940378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.785 qpair failed and we were unable to recover it. 00:28:19.785 [2024-11-19 10:55:26.940508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.785 [2024-11-19 10:55:26.940538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.785 qpair failed and we were unable to recover it. 00:28:19.785 [2024-11-19 10:55:26.940669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.785 [2024-11-19 10:55:26.940700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.785 qpair failed and we were unable to recover it. 00:28:19.785 [2024-11-19 10:55:26.940826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.785 [2024-11-19 10:55:26.940857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.785 qpair failed and we were unable to recover it. 00:28:19.785 [2024-11-19 10:55:26.940974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.785 [2024-11-19 10:55:26.941007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.785 qpair failed and we were unable to recover it. 
00:28:19.785 [2024-11-19 10:55:26.941145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.785 [2024-11-19 10:55:26.941178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.785 qpair failed and we were unable to recover it. 00:28:19.785 [2024-11-19 10:55:26.941283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.785 [2024-11-19 10:55:26.941315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.785 qpair failed and we were unable to recover it. 00:28:19.785 [2024-11-19 10:55:26.941428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.785 [2024-11-19 10:55:26.941459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.785 qpair failed and we were unable to recover it. 00:28:19.785 [2024-11-19 10:55:26.941565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.785 [2024-11-19 10:55:26.941597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.785 qpair failed and we were unable to recover it. 00:28:19.785 [2024-11-19 10:55:26.941767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.785 [2024-11-19 10:55:26.941798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.785 qpair failed and we were unable to recover it. 00:28:19.785 [2024-11-19 10:55:26.942004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.785 [2024-11-19 10:55:26.942037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.785 qpair failed and we were unable to recover it. 00:28:19.785 [2024-11-19 10:55:26.942231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.785 [2024-11-19 10:55:26.942262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.785 qpair failed and we were unable to recover it. 00:28:19.785 [2024-11-19 10:55:26.942377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.785 [2024-11-19 10:55:26.942409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.785 qpair failed and we were unable to recover it. 00:28:19.785 [2024-11-19 10:55:26.942533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.785 [2024-11-19 10:55:26.942564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.785 qpair failed and we were unable to recover it. 00:28:19.785 [2024-11-19 10:55:26.942669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.785 [2024-11-19 10:55:26.942699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.785 qpair failed and we were unable to recover it. 
00:28:19.785 [2024-11-19 10:55:26.942893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.785 [2024-11-19 10:55:26.942924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.785 qpair failed and we were unable to recover it. 00:28:19.785 [2024-11-19 10:55:26.943080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.785 [2024-11-19 10:55:26.943111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.785 qpair failed and we were unable to recover it. 00:28:19.785 [2024-11-19 10:55:26.943231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.785 [2024-11-19 10:55:26.943263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.785 qpair failed and we were unable to recover it. 00:28:19.785 [2024-11-19 10:55:26.943445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.785 [2024-11-19 10:55:26.943476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.785 qpair failed and we were unable to recover it. 00:28:19.785 [2024-11-19 10:55:26.943586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.785 [2024-11-19 10:55:26.943617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.785 qpair failed and we were unable to recover it. 00:28:19.785 [2024-11-19 10:55:26.943858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.785 [2024-11-19 10:55:26.943889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.785 qpair failed and we were unable to recover it. 00:28:19.785 [2024-11-19 10:55:26.944015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.785 [2024-11-19 10:55:26.944048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.785 qpair failed and we were unable to recover it. 00:28:19.785 [2024-11-19 10:55:26.944172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.785 [2024-11-19 10:55:26.944204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.785 qpair failed and we were unable to recover it. 00:28:19.785 [2024-11-19 10:55:26.944377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.785 [2024-11-19 10:55:26.944409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.785 qpair failed and we were unable to recover it. 00:28:19.785 [2024-11-19 10:55:26.944533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.785 [2024-11-19 10:55:26.944564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.785 qpair failed and we were unable to recover it. 
00:28:19.785 [2024-11-19 10:55:26.944703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.785 [2024-11-19 10:55:26.944734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.785 qpair failed and we were unable to recover it. 00:28:19.785 [2024-11-19 10:55:26.944914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.785 [2024-11-19 10:55:26.944945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.785 qpair failed and we were unable to recover it. 00:28:19.785 [2024-11-19 10:55:26.945128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.785 [2024-11-19 10:55:26.945159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.785 qpair failed and we were unable to recover it. 00:28:19.785 [2024-11-19 10:55:26.945284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.785 [2024-11-19 10:55:26.945316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.785 qpair failed and we were unable to recover it. 00:28:19.785 [2024-11-19 10:55:26.945489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.785 [2024-11-19 10:55:26.945520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.785 qpair failed and we were unable to recover it. 00:28:19.785 [2024-11-19 10:55:26.945658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.785 [2024-11-19 10:55:26.945689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.785 qpair failed and we were unable to recover it. 00:28:19.785 [2024-11-19 10:55:26.945866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.785 [2024-11-19 10:55:26.945903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.785 qpair failed and we were unable to recover it. 00:28:19.785 [2024-11-19 10:55:26.946048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.785 [2024-11-19 10:55:26.946081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.785 qpair failed and we were unable to recover it. 00:28:19.785 [2024-11-19 10:55:26.946251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.786 [2024-11-19 10:55:26.946281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.786 qpair failed and we were unable to recover it. 00:28:19.786 [2024-11-19 10:55:26.946449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.786 [2024-11-19 10:55:26.946481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.786 qpair failed and we were unable to recover it. 
00:28:19.786 [2024-11-19 10:55:26.946662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.786 [2024-11-19 10:55:26.946693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.786 qpair failed and we were unable to recover it. 00:28:19.786 [2024-11-19 10:55:26.946818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.786 [2024-11-19 10:55:26.946849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.786 qpair failed and we were unable to recover it. 00:28:19.786 [2024-11-19 10:55:26.946976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.786 [2024-11-19 10:55:26.947009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.786 qpair failed and we were unable to recover it. 00:28:19.786 [2024-11-19 10:55:26.947183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.786 [2024-11-19 10:55:26.947220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.786 qpair failed and we were unable to recover it. 00:28:19.786 [2024-11-19 10:55:26.947334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.786 [2024-11-19 10:55:26.947367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.786 qpair failed and we were unable to recover it. 00:28:19.786 [2024-11-19 10:55:26.947482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.786 [2024-11-19 10:55:26.947513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.786 qpair failed and we were unable to recover it. 00:28:19.786 [2024-11-19 10:55:26.947699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.786 [2024-11-19 10:55:26.947731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.786 qpair failed and we were unable to recover it. 00:28:19.786 [2024-11-19 10:55:26.947932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.786 [2024-11-19 10:55:26.947974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.786 qpair failed and we were unable to recover it. 00:28:19.786 [2024-11-19 10:55:26.948169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.786 [2024-11-19 10:55:26.948201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.786 qpair failed and we were unable to recover it. 00:28:19.786 [2024-11-19 10:55:26.948312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.786 [2024-11-19 10:55:26.948343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.786 qpair failed and we were unable to recover it. 
00:28:19.786 [2024-11-19 10:55:26.948526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.786 [2024-11-19 10:55:26.948558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.786 qpair failed and we were unable to recover it. 00:28:19.786 [2024-11-19 10:55:26.948663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.786 [2024-11-19 10:55:26.948694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.786 qpair failed and we were unable to recover it. 00:28:19.786 [2024-11-19 10:55:26.948805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.786 [2024-11-19 10:55:26.948837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.786 qpair failed and we were unable to recover it. 00:28:19.786 [2024-11-19 10:55:26.949024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.786 [2024-11-19 10:55:26.949055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.786 qpair failed and we were unable to recover it. 00:28:19.786 [2024-11-19 10:55:26.949174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.786 [2024-11-19 10:55:26.949206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.786 qpair failed and we were unable to recover it. 00:28:19.786 [2024-11-19 10:55:26.949378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.786 [2024-11-19 10:55:26.949408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.786 qpair failed and we were unable to recover it. 00:28:19.786 [2024-11-19 10:55:26.949593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.786 [2024-11-19 10:55:26.949625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.786 qpair failed and we were unable to recover it. 00:28:19.786 [2024-11-19 10:55:26.949739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.786 [2024-11-19 10:55:26.949771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.786 qpair failed and we were unable to recover it. 00:28:19.786 [2024-11-19 10:55:26.949966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.786 [2024-11-19 10:55:26.949999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.786 qpair failed and we were unable to recover it. 00:28:19.786 [2024-11-19 10:55:26.950116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.786 [2024-11-19 10:55:26.950147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.786 qpair failed and we were unable to recover it. 
00:28:19.786 [2024-11-19 10:55:26.950251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.786 [2024-11-19 10:55:26.950282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.786 qpair failed and we were unable to recover it. 00:28:19.786 [2024-11-19 10:55:26.950459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.786 [2024-11-19 10:55:26.950490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.786 qpair failed and we were unable to recover it. 00:28:19.786 [2024-11-19 10:55:26.950612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.786 [2024-11-19 10:55:26.950644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.786 qpair failed and we were unable to recover it. 00:28:19.786 [2024-11-19 10:55:26.950851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.786 [2024-11-19 10:55:26.950882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.786 qpair failed and we were unable to recover it. 00:28:19.786 [2024-11-19 10:55:26.950997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.786 [2024-11-19 10:55:26.951031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.786 qpair failed and we were unable to recover it. 00:28:19.786 [2024-11-19 10:55:26.951152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.786 [2024-11-19 10:55:26.951183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.786 qpair failed and we were unable to recover it. 00:28:19.786 [2024-11-19 10:55:26.951300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.786 [2024-11-19 10:55:26.951331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.786 qpair failed and we were unable to recover it. 00:28:19.786 [2024-11-19 10:55:26.951559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.786 [2024-11-19 10:55:26.951591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.786 qpair failed and we were unable to recover it. 00:28:19.786 [2024-11-19 10:55:26.951764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.786 [2024-11-19 10:55:26.951794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.786 qpair failed and we were unable to recover it. 00:28:19.786 [2024-11-19 10:55:26.951908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.786 [2024-11-19 10:55:26.951940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.786 qpair failed and we were unable to recover it. 
00:28:19.786 [2024-11-19 10:55:26.952144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.786 [2024-11-19 10:55:26.952176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.786 qpair failed and we were unable to recover it. 00:28:19.786 [2024-11-19 10:55:26.952311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.786 [2024-11-19 10:55:26.952342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.786 qpair failed and we were unable to recover it. 00:28:19.786 [2024-11-19 10:55:26.952445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.786 [2024-11-19 10:55:26.952476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.786 qpair failed and we were unable to recover it. 00:28:19.786 [2024-11-19 10:55:26.952576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.786 [2024-11-19 10:55:26.952608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.786 qpair failed and we were unable to recover it. 00:28:19.786 [2024-11-19 10:55:26.952788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.786 [2024-11-19 10:55:26.952819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.786 qpair failed and we were unable to recover it. 00:28:19.786 [2024-11-19 10:55:26.953002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.786 [2024-11-19 10:55:26.953035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.786 qpair failed and we were unable to recover it. 00:28:19.787 [2024-11-19 10:55:26.953224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.787 [2024-11-19 10:55:26.953262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.787 qpair failed and we were unable to recover it. 00:28:19.787 [2024-11-19 10:55:26.953382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.787 [2024-11-19 10:55:26.953414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.787 qpair failed and we were unable to recover it. 00:28:19.787 [2024-11-19 10:55:26.953671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.787 [2024-11-19 10:55:26.953702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.787 qpair failed and we were unable to recover it. 00:28:19.787 [2024-11-19 10:55:26.953841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.787 [2024-11-19 10:55:26.953873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.787 qpair failed and we were unable to recover it. 
00:28:19.787 [2024-11-19 10:55:26.953992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.787 [2024-11-19 10:55:26.954024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.787 qpair failed and we were unable to recover it. 00:28:19.787 [2024-11-19 10:55:26.954133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.787 [2024-11-19 10:55:26.954164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.787 qpair failed and we were unable to recover it. 00:28:19.787 [2024-11-19 10:55:26.954351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.787 [2024-11-19 10:55:26.954382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.787 qpair failed and we were unable to recover it. 00:28:19.787 [2024-11-19 10:55:26.954496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.787 [2024-11-19 10:55:26.954527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.787 qpair failed and we were unable to recover it. 00:28:19.787 [2024-11-19 10:55:26.954670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.787 [2024-11-19 10:55:26.954701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.787 qpair failed and we were unable to recover it. 00:28:19.787 [2024-11-19 10:55:26.954868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.787 [2024-11-19 10:55:26.954900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.787 qpair failed and we were unable to recover it. 00:28:19.787 [2024-11-19 10:55:26.955083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.787 [2024-11-19 10:55:26.955115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.787 qpair failed and we were unable to recover it. 00:28:19.787 [2024-11-19 10:55:26.955356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.787 [2024-11-19 10:55:26.955388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.787 qpair failed and we were unable to recover it. 00:28:19.787 [2024-11-19 10:55:26.955508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.787 [2024-11-19 10:55:26.955540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.787 qpair failed and we were unable to recover it. 00:28:19.787 [2024-11-19 10:55:26.955716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.787 [2024-11-19 10:55:26.955749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.787 qpair failed and we were unable to recover it. 
00:28:19.787 [2024-11-19 10:55:26.955895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.787 [2024-11-19 10:55:26.955927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.787 qpair failed and we were unable to recover it. 00:28:19.787 [2024-11-19 10:55:26.956118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.787 [2024-11-19 10:55:26.956151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.787 qpair failed and we were unable to recover it. 00:28:19.787 [2024-11-19 10:55:26.956318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.787 [2024-11-19 10:55:26.956349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.787 qpair failed and we were unable to recover it. 00:28:19.787 [2024-11-19 10:55:26.956537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.787 [2024-11-19 10:55:26.956568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.787 qpair failed and we were unable to recover it. 00:28:19.787 [2024-11-19 10:55:26.956856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.787 [2024-11-19 10:55:26.956887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.787 qpair failed and we were unable to recover it. 00:28:19.787 [2024-11-19 10:55:26.957014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.787 [2024-11-19 10:55:26.957047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.787 qpair failed and we were unable to recover it. 00:28:19.787 [2024-11-19 10:55:26.957235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.787 [2024-11-19 10:55:26.957267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.787 qpair failed and we were unable to recover it. 00:28:19.787 [2024-11-19 10:55:26.957381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.787 [2024-11-19 10:55:26.957411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.787 qpair failed and we were unable to recover it. 00:28:19.787 [2024-11-19 10:55:26.957541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.787 [2024-11-19 10:55:26.957573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.787 qpair failed and we were unable to recover it. 00:28:19.787 [2024-11-19 10:55:26.957767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.787 [2024-11-19 10:55:26.957798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.787 qpair failed and we were unable to recover it. 
00:28:19.787 [2024-11-19 10:55:26.957915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.787 [2024-11-19 10:55:26.957963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.787 qpair failed and we were unable to recover it. 00:28:19.787 [2024-11-19 10:55:26.958079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.787 [2024-11-19 10:55:26.958110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.787 qpair failed and we were unable to recover it. 00:28:19.787 [2024-11-19 10:55:26.958348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.787 [2024-11-19 10:55:26.958382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.787 qpair failed and we were unable to recover it. 00:28:19.787 [2024-11-19 10:55:26.958505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.787 [2024-11-19 10:55:26.958536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.787 qpair failed and we were unable to recover it. 00:28:19.787 [2024-11-19 10:55:26.958714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.787 [2024-11-19 10:55:26.958746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.787 qpair failed and we were unable to recover it. 00:28:19.787 [2024-11-19 10:55:26.958862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.787 [2024-11-19 10:55:26.958894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.787 qpair failed and we were unable to recover it. 00:28:19.787 [2024-11-19 10:55:26.959170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.787 [2024-11-19 10:55:26.959203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.787 qpair failed and we were unable to recover it. 00:28:19.787 [2024-11-19 10:55:26.959374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.787 [2024-11-19 10:55:26.959405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.787 qpair failed and we were unable to recover it. 00:28:19.787 [2024-11-19 10:55:26.959524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.787 [2024-11-19 10:55:26.959557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.787 qpair failed and we were unable to recover it. 00:28:19.787 [2024-11-19 10:55:26.959811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.787 [2024-11-19 10:55:26.959842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.787 qpair failed and we were unable to recover it. 
00:28:19.787 [2024-11-19 10:55:26.960001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.787 [2024-11-19 10:55:26.960036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.787 qpair failed and we were unable to recover it. 00:28:19.787 [2024-11-19 10:55:26.960158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.787 [2024-11-19 10:55:26.960190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.787 qpair failed and we were unable to recover it. 00:28:19.787 [2024-11-19 10:55:26.960360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.787 [2024-11-19 10:55:26.960391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.787 qpair failed and we were unable to recover it. 00:28:19.787 [2024-11-19 10:55:26.960523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.787 [2024-11-19 10:55:26.960554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.787 qpair failed and we were unable to recover it. 00:28:19.788 [2024-11-19 10:55:26.960672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.788 [2024-11-19 10:55:26.960703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.788 qpair failed and we were unable to recover it. 00:28:19.788 [2024-11-19 10:55:26.960824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.788 [2024-11-19 10:55:26.960855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.788 qpair failed and we were unable to recover it. 00:28:19.788 [2024-11-19 10:55:26.960979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.788 [2024-11-19 10:55:26.961018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.788 qpair failed and we were unable to recover it. 00:28:19.788 [2024-11-19 10:55:26.961123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.788 [2024-11-19 10:55:26.961154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.788 qpair failed and we were unable to recover it. 00:28:19.788 [2024-11-19 10:55:26.961266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.788 [2024-11-19 10:55:26.961298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.788 qpair failed and we were unable to recover it. 00:28:19.788 [2024-11-19 10:55:26.961484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.788 [2024-11-19 10:55:26.961516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.788 qpair failed and we were unable to recover it. 
00:28:19.788 [2024-11-19 10:55:26.961690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.788 [2024-11-19 10:55:26.961722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.788 qpair failed and we were unable to recover it. 00:28:19.788 [2024-11-19 10:55:26.961829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.788 [2024-11-19 10:55:26.961860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.788 qpair failed and we were unable to recover it. 00:28:19.788 [2024-11-19 10:55:26.962003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.788 [2024-11-19 10:55:26.962037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.788 qpair failed and we were unable to recover it. 00:28:19.788 [2024-11-19 10:55:26.962243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.788 [2024-11-19 10:55:26.962274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.788 qpair failed and we were unable to recover it. 00:28:19.788 [2024-11-19 10:55:26.962456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.788 [2024-11-19 10:55:26.962488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.788 qpair failed and we were unable to recover it. 00:28:19.788 [2024-11-19 10:55:26.962678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.788 [2024-11-19 10:55:26.962710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.788 qpair failed and we were unable to recover it. 00:28:19.788 [2024-11-19 10:55:26.962817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.788 [2024-11-19 10:55:26.962849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.788 qpair failed and we were unable to recover it. 00:28:19.788 [2024-11-19 10:55:26.963026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.788 [2024-11-19 10:55:26.963057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.788 qpair failed and we were unable to recover it. 00:28:19.788 [2024-11-19 10:55:26.963265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.788 [2024-11-19 10:55:26.963298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.788 qpair failed and we were unable to recover it. 00:28:19.788 [2024-11-19 10:55:26.963487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.788 [2024-11-19 10:55:26.963518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.788 qpair failed and we were unable to recover it. 
00:28:19.788 [2024-11-19 10:55:26.963649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.788 [2024-11-19 10:55:26.963681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.788 qpair failed and we were unable to recover it. 00:28:19.788 [2024-11-19 10:55:26.963788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.788 [2024-11-19 10:55:26.963819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.788 qpair failed and we were unable to recover it. 00:28:19.788 [2024-11-19 10:55:26.963999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.788 [2024-11-19 10:55:26.964032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.788 qpair failed and we were unable to recover it. 00:28:19.788 [2024-11-19 10:55:26.964300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.788 [2024-11-19 10:55:26.964332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.788 qpair failed and we were unable to recover it. 00:28:19.788 [2024-11-19 10:55:26.964514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.788 [2024-11-19 10:55:26.964547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.788 qpair failed and we were unable to recover it. 00:28:19.788 [2024-11-19 10:55:26.964723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.788 [2024-11-19 10:55:26.964753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.788 qpair failed and we were unable to recover it. 00:28:19.788 [2024-11-19 10:55:26.964987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.788 [2024-11-19 10:55:26.965020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.788 qpair failed and we were unable to recover it. 00:28:19.788 [2024-11-19 10:55:26.965140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.788 [2024-11-19 10:55:26.965171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.788 qpair failed and we were unable to recover it. 00:28:19.788 [2024-11-19 10:55:26.965288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.788 [2024-11-19 10:55:26.965319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.788 qpair failed and we were unable to recover it. 00:28:19.788 [2024-11-19 10:55:26.965433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.788 [2024-11-19 10:55:26.965464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.788 qpair failed and we were unable to recover it. 
00:28:19.788 [2024-11-19 10:55:26.965706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.788 [2024-11-19 10:55:26.965738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.788 qpair failed and we were unable to recover it. 00:28:19.788 [2024-11-19 10:55:26.965862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.788 [2024-11-19 10:55:26.965893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.788 qpair failed and we were unable to recover it. 00:28:19.788 [2024-11-19 10:55:26.966093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.788 [2024-11-19 10:55:26.966126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.788 qpair failed and we were unable to recover it. 00:28:19.788 [2024-11-19 10:55:26.966319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.788 [2024-11-19 10:55:26.966351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.788 qpair failed and we were unable to recover it. 00:28:19.788 [2024-11-19 10:55:26.966458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.788 [2024-11-19 10:55:26.966489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.788 qpair failed and we were unable to recover it. 00:28:19.788 [2024-11-19 10:55:26.966679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.788 [2024-11-19 10:55:26.966710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.788 qpair failed and we were unable to recover it. 00:28:19.788 [2024-11-19 10:55:26.966904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.788 [2024-11-19 10:55:26.966937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.788 qpair failed and we were unable to recover it. 00:28:19.788 [2024-11-19 10:55:26.967067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.788 [2024-11-19 10:55:26.967099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.788 qpair failed and we were unable to recover it. 00:28:19.788 [2024-11-19 10:55:26.967214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.788 [2024-11-19 10:55:26.967246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.788 qpair failed and we were unable to recover it. 00:28:19.788 [2024-11-19 10:55:26.967355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.788 [2024-11-19 10:55:26.967385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.788 qpair failed and we were unable to recover it. 
00:28:19.788 [2024-11-19 10:55:26.967606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.788 [2024-11-19 10:55:26.967638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.788 qpair failed and we were unable to recover it. 00:28:19.788 [2024-11-19 10:55:26.967745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.788 [2024-11-19 10:55:26.967776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.789 qpair failed and we were unable to recover it. 00:28:19.789 [2024-11-19 10:55:26.968059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.789 [2024-11-19 10:55:26.968093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.789 qpair failed and we were unable to recover it. 00:28:19.789 [2024-11-19 10:55:26.968272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.789 [2024-11-19 10:55:26.968303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.789 qpair failed and we were unable to recover it. 00:28:19.789 [2024-11-19 10:55:26.968406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.789 [2024-11-19 10:55:26.968438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.789 qpair failed and we were unable to recover it. 00:28:19.789 [2024-11-19 10:55:26.968645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.789 [2024-11-19 10:55:26.968676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.789 qpair failed and we were unable to recover it. 00:28:19.789 [2024-11-19 10:55:26.968789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.789 [2024-11-19 10:55:26.968826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.789 qpair failed and we were unable to recover it. 00:28:19.789 [2024-11-19 10:55:26.969003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.789 [2024-11-19 10:55:26.969035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.789 qpair failed and we were unable to recover it. 00:28:19.789 [2024-11-19 10:55:26.969167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.789 [2024-11-19 10:55:26.969199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.789 qpair failed and we were unable to recover it. 00:28:19.789 [2024-11-19 10:55:26.969380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.789 [2024-11-19 10:55:26.969411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.789 qpair failed and we were unable to recover it. 
00:28:19.789 [2024-11-19 10:55:26.969607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.789 [2024-11-19 10:55:26.969638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.789 qpair failed and we were unable to recover it. 00:28:19.789 [2024-11-19 10:55:26.969809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.789 [2024-11-19 10:55:26.969840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.789 qpair failed and we were unable to recover it. 00:28:19.789 [2024-11-19 10:55:26.970014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.789 [2024-11-19 10:55:26.970047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.789 qpair failed and we were unable to recover it. 00:28:19.789 [2024-11-19 10:55:26.970168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.789 [2024-11-19 10:55:26.970199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.789 qpair failed and we were unable to recover it. 00:28:19.789 [2024-11-19 10:55:26.970490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.789 [2024-11-19 10:55:26.970522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.789 qpair failed and we were unable to recover it. 00:28:19.789 [2024-11-19 10:55:26.970704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.789 [2024-11-19 10:55:26.970735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.789 qpair failed and we were unable to recover it. 00:28:19.789 [2024-11-19 10:55:26.970852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.789 [2024-11-19 10:55:26.970884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.789 qpair failed and we were unable to recover it. 00:28:19.789 [2024-11-19 10:55:26.971013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.789 [2024-11-19 10:55:26.971045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.789 qpair failed and we were unable to recover it. 00:28:19.789 [2024-11-19 10:55:26.971181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.789 [2024-11-19 10:55:26.971213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.789 qpair failed and we were unable to recover it. 00:28:19.789 [2024-11-19 10:55:26.971333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.789 [2024-11-19 10:55:26.971363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.789 qpair failed and we were unable to recover it. 
00:28:19.789 [2024-11-19 10:55:26.971499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.789 [2024-11-19 10:55:26.971531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.789 qpair failed and we were unable to recover it. 00:28:19.789 [2024-11-19 10:55:26.971751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.789 [2024-11-19 10:55:26.971782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.789 qpair failed and we were unable to recover it. 00:28:19.789 [2024-11-19 10:55:26.971898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.789 [2024-11-19 10:55:26.971930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.789 qpair failed and we were unable to recover it. 00:28:19.789 [2024-11-19 10:55:26.972063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.789 [2024-11-19 10:55:26.972095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.789 qpair failed and we were unable to recover it. 00:28:19.789 [2024-11-19 10:55:26.972266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.789 [2024-11-19 10:55:26.972298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.789 qpair failed and we were unable to recover it. 00:28:19.789 [2024-11-19 10:55:26.972483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.789 [2024-11-19 10:55:26.972515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.789 qpair failed and we were unable to recover it. 00:28:19.789 [2024-11-19 10:55:26.972660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.789 [2024-11-19 10:55:26.972690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.789 qpair failed and we were unable to recover it. 00:28:19.789 [2024-11-19 10:55:26.972874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.789 [2024-11-19 10:55:26.972906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.789 qpair failed and we were unable to recover it. 00:28:19.789 [2024-11-19 10:55:26.973137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.789 [2024-11-19 10:55:26.973170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.789 qpair failed and we were unable to recover it. 00:28:19.789 [2024-11-19 10:55:26.973303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.789 [2024-11-19 10:55:26.973335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.789 qpair failed and we were unable to recover it. 
00:28:19.789 [2024-11-19 10:55:26.973458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.789 [2024-11-19 10:55:26.973489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.789 qpair failed and we were unable to recover it. 00:28:19.789 [2024-11-19 10:55:26.973618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.789 [2024-11-19 10:55:26.973650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.789 qpair failed and we were unable to recover it. 00:28:19.789 [2024-11-19 10:55:26.973828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.789 [2024-11-19 10:55:26.973860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.789 qpair failed and we were unable to recover it. 00:28:19.789 [2024-11-19 10:55:26.974110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.789 [2024-11-19 10:55:26.974145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.789 qpair failed and we were unable to recover it. 00:28:19.789 [2024-11-19 10:55:26.974287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.790 [2024-11-19 10:55:26.974318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.790 qpair failed and we were unable to recover it. 00:28:19.790 [2024-11-19 10:55:26.974490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.790 [2024-11-19 10:55:26.974522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.790 qpair failed and we were unable to recover it. 00:28:19.790 [2024-11-19 10:55:26.974651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.790 [2024-11-19 10:55:26.974683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.790 qpair failed and we were unable to recover it. 00:28:19.790 [2024-11-19 10:55:26.974852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.790 [2024-11-19 10:55:26.974884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.790 qpair failed and we were unable to recover it. 00:28:19.790 [2024-11-19 10:55:26.975013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.790 [2024-11-19 10:55:26.975045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.790 qpair failed and we were unable to recover it. 00:28:19.790 [2024-11-19 10:55:26.975154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.790 [2024-11-19 10:55:26.975186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.790 qpair failed and we were unable to recover it. 
00:28:19.790 [2024-11-19 10:55:26.975306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.790 [2024-11-19 10:55:26.975337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.790 qpair failed and we were unable to recover it. 00:28:19.790 [2024-11-19 10:55:26.975619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.790 [2024-11-19 10:55:26.975650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.790 qpair failed and we were unable to recover it. 00:28:19.790 [2024-11-19 10:55:26.975891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.790 [2024-11-19 10:55:26.975922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.790 qpair failed and we were unable to recover it. 00:28:19.790 [2024-11-19 10:55:26.976050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.790 [2024-11-19 10:55:26.976082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.790 qpair failed and we were unable to recover it. 00:28:19.790 [2024-11-19 10:55:26.976308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.790 [2024-11-19 10:55:26.976340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.790 qpair failed and we were unable to recover it. 00:28:19.790 [2024-11-19 10:55:26.976469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.790 [2024-11-19 10:55:26.976501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.790 qpair failed and we were unable to recover it. 00:28:19.790 [2024-11-19 10:55:26.976622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.790 [2024-11-19 10:55:26.976658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.790 qpair failed and we were unable to recover it. 00:28:19.790 [2024-11-19 10:55:26.976770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.790 [2024-11-19 10:55:26.976801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.790 qpair failed and we were unable to recover it. 00:28:19.790 [2024-11-19 10:55:26.976977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.790 [2024-11-19 10:55:26.977010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.790 qpair failed and we were unable to recover it. 00:28:19.790 [2024-11-19 10:55:26.977186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.790 [2024-11-19 10:55:26.977218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.790 qpair failed and we were unable to recover it. 
00:28:19.790 [2024-11-19 10:55:26.977421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.790 [2024-11-19 10:55:26.977451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.790 qpair failed and we were unable to recover it. 00:28:19.790 [2024-11-19 10:55:26.977639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.790 [2024-11-19 10:55:26.977670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.790 qpair failed and we were unable to recover it. 00:28:19.790 [2024-11-19 10:55:26.977802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.790 [2024-11-19 10:55:26.977833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.790 qpair failed and we were unable to recover it. 00:28:19.790 [2024-11-19 10:55:26.978016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.790 [2024-11-19 10:55:26.978050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.790 qpair failed and we were unable to recover it. 00:28:19.790 [2024-11-19 10:55:26.978182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.790 [2024-11-19 10:55:26.978213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.790 qpair failed and we were unable to recover it. 00:28:19.790 [2024-11-19 10:55:26.978336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.790 [2024-11-19 10:55:26.978367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.790 qpair failed and we were unable to recover it. 00:28:19.790 [2024-11-19 10:55:26.978492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.790 [2024-11-19 10:55:26.978523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.790 qpair failed and we were unable to recover it. 00:28:19.790 [2024-11-19 10:55:26.978695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.790 [2024-11-19 10:55:26.978726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.790 qpair failed and we were unable to recover it. 00:28:19.790 [2024-11-19 10:55:26.978837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.790 [2024-11-19 10:55:26.978868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.790 qpair failed and we were unable to recover it. 00:28:19.790 [2024-11-19 10:55:26.979108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.790 [2024-11-19 10:55:26.979141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.790 qpair failed and we were unable to recover it. 
00:28:19.790 [2024-11-19 10:55:26.979336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.790 [2024-11-19 10:55:26.979368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.790 qpair failed and we were unable to recover it. 00:28:19.790 [2024-11-19 10:55:26.979471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.790 [2024-11-19 10:55:26.979501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.790 qpair failed and we were unable to recover it. 00:28:19.790 [2024-11-19 10:55:26.979692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.790 [2024-11-19 10:55:26.979724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.790 qpair failed and we were unable to recover it. 00:28:19.790 [2024-11-19 10:55:26.979891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.790 [2024-11-19 10:55:26.979922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.790 qpair failed and we were unable to recover it. 00:28:19.790 [2024-11-19 10:55:26.980069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.790 [2024-11-19 10:55:26.980102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.790 qpair failed and we were unable to recover it. 00:28:19.790 [2024-11-19 10:55:26.980235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.790 [2024-11-19 10:55:26.980266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.790 qpair failed and we were unable to recover it. 00:28:19.790 [2024-11-19 10:55:26.980390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.790 [2024-11-19 10:55:26.980421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.790 qpair failed and we were unable to recover it. 00:28:19.790 [2024-11-19 10:55:26.980598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.790 [2024-11-19 10:55:26.980630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.790 qpair failed and we were unable to recover it. 00:28:19.790 [2024-11-19 10:55:26.980809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.790 [2024-11-19 10:55:26.980841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.790 qpair failed and we were unable to recover it. 00:28:19.790 [2024-11-19 10:55:26.980968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.790 [2024-11-19 10:55:26.981000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.790 qpair failed and we were unable to recover it. 
00:28:19.790 [2024-11-19 10:55:26.981107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.790 [2024-11-19 10:55:26.981140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.790 qpair failed and we were unable to recover it. 00:28:19.790 [2024-11-19 10:55:26.981253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.790 [2024-11-19 10:55:26.981284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.790 qpair failed and we were unable to recover it. 00:28:19.790 [2024-11-19 10:55:26.981405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.791 [2024-11-19 10:55:26.981437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.791 qpair failed and we were unable to recover it. 00:28:19.791 [2024-11-19 10:55:26.981551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.791 [2024-11-19 10:55:26.981582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.791 qpair failed and we were unable to recover it. 00:28:19.791 [2024-11-19 10:55:26.981773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.791 [2024-11-19 10:55:26.981805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.791 qpair failed and we were unable to recover it. 00:28:19.791 [2024-11-19 10:55:26.981920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.791 [2024-11-19 10:55:26.981961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.791 qpair failed and we were unable to recover it. 00:28:19.791 [2024-11-19 10:55:26.982133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.791 [2024-11-19 10:55:26.982165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.791 qpair failed and we were unable to recover it. 00:28:19.791 [2024-11-19 10:55:26.982337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.791 [2024-11-19 10:55:26.982369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.791 qpair failed and we were unable to recover it. 00:28:19.791 [2024-11-19 10:55:26.982546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.791 [2024-11-19 10:55:26.982578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.791 qpair failed and we were unable to recover it. 00:28:19.791 [2024-11-19 10:55:26.982694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.791 [2024-11-19 10:55:26.982726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.791 qpair failed and we were unable to recover it. 
00:28:19.791 [2024-11-19 10:55:26.982940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.791 [2024-11-19 10:55:26.982981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.791 qpair failed and we were unable to recover it. 00:28:19.791 [2024-11-19 10:55:26.983096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.791 [2024-11-19 10:55:26.983127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.791 qpair failed and we were unable to recover it. 00:28:19.791 [2024-11-19 10:55:26.983248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.791 [2024-11-19 10:55:26.983279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.791 qpair failed and we were unable to recover it. 00:28:19.791 [2024-11-19 10:55:26.983467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.791 [2024-11-19 10:55:26.983498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.791 qpair failed and we were unable to recover it. 00:28:19.791 [2024-11-19 10:55:26.983611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.791 [2024-11-19 10:55:26.983643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.791 qpair failed and we were unable to recover it. 00:28:19.791 [2024-11-19 10:55:26.983854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.791 [2024-11-19 10:55:26.983885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.791 qpair failed and we were unable to recover it. 00:28:19.791 [2024-11-19 10:55:26.984027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.791 [2024-11-19 10:55:26.984065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.791 qpair failed and we were unable to recover it. 00:28:19.791 [2024-11-19 10:55:26.984248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.791 [2024-11-19 10:55:26.984280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.791 qpair failed and we were unable to recover it. 00:28:19.791 [2024-11-19 10:55:26.984401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.791 [2024-11-19 10:55:26.984432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.791 qpair failed and we were unable to recover it. 00:28:19.791 [2024-11-19 10:55:26.984537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.791 [2024-11-19 10:55:26.984569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.791 qpair failed and we were unable to recover it. 
00:28:19.791 [2024-11-19 10:55:26.984692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.791 [2024-11-19 10:55:26.984723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.791 qpair failed and we were unable to recover it. 00:28:19.791 [2024-11-19 10:55:26.984831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.791 [2024-11-19 10:55:26.984862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.791 qpair failed and we were unable to recover it. 00:28:19.791 [2024-11-19 10:55:26.984995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.791 [2024-11-19 10:55:26.985028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.791 qpair failed and we were unable to recover it. 00:28:19.791 [2024-11-19 10:55:26.985132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.791 [2024-11-19 10:55:26.985164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.791 qpair failed and we were unable to recover it. 00:28:19.791 [2024-11-19 10:55:26.985357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.791 [2024-11-19 10:55:26.985389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.791 qpair failed and we were unable to recover it. 00:28:19.791 [2024-11-19 10:55:26.985507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.791 [2024-11-19 10:55:26.985538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.791 qpair failed and we were unable to recover it. 00:28:19.791 [2024-11-19 10:55:26.985645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.791 [2024-11-19 10:55:26.985676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.791 qpair failed and we were unable to recover it. 00:28:19.791 [2024-11-19 10:55:26.985855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.791 [2024-11-19 10:55:26.985887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.791 qpair failed and we were unable to recover it. 00:28:19.791 [2024-11-19 10:55:26.986058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.791 [2024-11-19 10:55:26.986090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.791 qpair failed and we were unable to recover it. 00:28:19.791 [2024-11-19 10:55:26.986266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.791 [2024-11-19 10:55:26.986298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.791 qpair failed and we were unable to recover it. 
00:28:19.793 [2024-11-19 10:55:26.995780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.793 [2024-11-19 10:55:26.995811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.793 qpair failed and we were unable to recover it.
00:28:19.793 [2024-11-19 10:55:26.995934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.793 [2024-11-19 10:55:26.995978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.793 qpair failed and we were unable to recover it.
00:28:19.793 [2024-11-19 10:55:26.996152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.793 [2024-11-19 10:55:26.996183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.793 qpair failed and we were unable to recover it.
00:28:19.793 [2024-11-19 10:55:26.996341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.793 [2024-11-19 10:55:26.996411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.793 qpair failed and we were unable to recover it.
00:28:19.793 [2024-11-19 10:55:26.996562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.793 [2024-11-19 10:55:26.996598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.793 qpair failed and we were unable to recover it.
00:28:19.793 [2024-11-19 10:55:26.996782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.793 [2024-11-19 10:55:26.996815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.793 qpair failed and we were unable to recover it.
00:28:19.793 [2024-11-19 10:55:26.996943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.793 [2024-11-19 10:55:26.996988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.793 qpair failed and we were unable to recover it.
00:28:19.793 [2024-11-19 10:55:26.997254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.793 [2024-11-19 10:55:26.997287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.793 qpair failed and we were unable to recover it.
00:28:19.793 [2024-11-19 10:55:26.997555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.793 [2024-11-19 10:55:26.997586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.793 qpair failed and we were unable to recover it.
00:28:19.793 [2024-11-19 10:55:26.997774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.793 [2024-11-19 10:55:26.997806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.793 qpair failed and we were unable to recover it.
00:28:19.796 [2024-11-19 10:55:27.021598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.796 [2024-11-19 10:55:27.021630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.796 qpair failed and we were unable to recover it. 00:28:19.796 [2024-11-19 10:55:27.021868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.796 [2024-11-19 10:55:27.021899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.796 qpair failed and we were unable to recover it. 00:28:19.796 [2024-11-19 10:55:27.022021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.796 [2024-11-19 10:55:27.022053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.796 qpair failed and we were unable to recover it. 00:28:19.796 [2024-11-19 10:55:27.022316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.796 [2024-11-19 10:55:27.022349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.796 qpair failed and we were unable to recover it. 00:28:19.796 [2024-11-19 10:55:27.022484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.797 [2024-11-19 10:55:27.022516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.797 qpair failed and we were unable to recover it. 00:28:19.797 [2024-11-19 10:55:27.022639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.797 [2024-11-19 10:55:27.022671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.797 qpair failed and we were unable to recover it. 00:28:19.797 [2024-11-19 10:55:27.022861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.797 [2024-11-19 10:55:27.022892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.797 qpair failed and we were unable to recover it. 00:28:19.797 [2024-11-19 10:55:27.023026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.797 [2024-11-19 10:55:27.023059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.797 qpair failed and we were unable to recover it. 00:28:19.797 [2024-11-19 10:55:27.023170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.797 [2024-11-19 10:55:27.023203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.797 qpair failed and we were unable to recover it. 00:28:19.797 [2024-11-19 10:55:27.023450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.797 [2024-11-19 10:55:27.023482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.797 qpair failed and we were unable to recover it. 
00:28:19.797 [2024-11-19 10:55:27.023612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.797 [2024-11-19 10:55:27.023643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.797 qpair failed and we were unable to recover it. 00:28:19.797 [2024-11-19 10:55:27.023882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.797 [2024-11-19 10:55:27.023914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.797 qpair failed and we were unable to recover it. 00:28:19.797 [2024-11-19 10:55:27.024132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.797 [2024-11-19 10:55:27.024166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.797 qpair failed and we were unable to recover it. 00:28:19.797 [2024-11-19 10:55:27.024414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.797 [2024-11-19 10:55:27.024445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.797 qpair failed and we were unable to recover it. 00:28:19.797 [2024-11-19 10:55:27.024635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.797 [2024-11-19 10:55:27.024667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.797 qpair failed and we were unable to recover it. 00:28:19.797 [2024-11-19 10:55:27.024857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.797 [2024-11-19 10:55:27.024889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.797 qpair failed and we were unable to recover it. 00:28:19.797 [2024-11-19 10:55:27.025051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.797 [2024-11-19 10:55:27.025085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.797 qpair failed and we were unable to recover it. 00:28:19.797 [2024-11-19 10:55:27.025301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.797 [2024-11-19 10:55:27.025333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.797 qpair failed and we were unable to recover it. 00:28:19.797 [2024-11-19 10:55:27.025453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.797 [2024-11-19 10:55:27.025485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.797 qpair failed and we were unable to recover it. 00:28:19.797 [2024-11-19 10:55:27.025668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.797 [2024-11-19 10:55:27.025699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.797 qpair failed and we were unable to recover it. 
00:28:19.797 [2024-11-19 10:55:27.025904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.797 [2024-11-19 10:55:27.025936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.797 qpair failed and we were unable to recover it. 00:28:19.797 [2024-11-19 10:55:27.026120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.797 [2024-11-19 10:55:27.026153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.797 qpair failed and we were unable to recover it. 00:28:19.797 [2024-11-19 10:55:27.026338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.797 [2024-11-19 10:55:27.026369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.797 qpair failed and we were unable to recover it. 00:28:19.797 [2024-11-19 10:55:27.026496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.797 [2024-11-19 10:55:27.026528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.797 qpair failed and we were unable to recover it. 00:28:19.797 [2024-11-19 10:55:27.026633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.797 [2024-11-19 10:55:27.026664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.797 qpair failed and we were unable to recover it. 00:28:19.797 [2024-11-19 10:55:27.026846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.797 [2024-11-19 10:55:27.026877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.797 qpair failed and we were unable to recover it. 00:28:19.797 [2024-11-19 10:55:27.026998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.797 [2024-11-19 10:55:27.027031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.797 qpair failed and we were unable to recover it. 00:28:19.797 [2024-11-19 10:55:27.027147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.797 [2024-11-19 10:55:27.027178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.797 qpair failed and we were unable to recover it. 00:28:19.797 [2024-11-19 10:55:27.027437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.797 [2024-11-19 10:55:27.027469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.797 qpair failed and we were unable to recover it. 00:28:19.797 [2024-11-19 10:55:27.027683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.797 [2024-11-19 10:55:27.027715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.797 qpair failed and we were unable to recover it. 
00:28:19.797 [2024-11-19 10:55:27.027842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.797 [2024-11-19 10:55:27.027873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.797 qpair failed and we were unable to recover it. 00:28:19.797 [2024-11-19 10:55:27.027999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.797 [2024-11-19 10:55:27.028032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.797 qpair failed and we were unable to recover it. 00:28:19.797 [2024-11-19 10:55:27.028155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.797 [2024-11-19 10:55:27.028186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.797 qpair failed and we were unable to recover it. 00:28:19.797 [2024-11-19 10:55:27.028359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.797 [2024-11-19 10:55:27.028390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.797 qpair failed and we were unable to recover it. 00:28:19.797 [2024-11-19 10:55:27.028491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.797 [2024-11-19 10:55:27.028522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.797 qpair failed and we were unable to recover it. 00:28:19.797 [2024-11-19 10:55:27.028629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.797 [2024-11-19 10:55:27.028661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.797 qpair failed and we were unable to recover it. 00:28:19.797 [2024-11-19 10:55:27.028839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.797 [2024-11-19 10:55:27.028875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.797 qpair failed and we were unable to recover it. 00:28:19.797 [2024-11-19 10:55:27.029019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.797 [2024-11-19 10:55:27.029051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.797 qpair failed and we were unable to recover it. 00:28:19.797 [2024-11-19 10:55:27.029264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.797 [2024-11-19 10:55:27.029297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.797 qpair failed and we were unable to recover it. 00:28:19.797 [2024-11-19 10:55:27.029541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.797 [2024-11-19 10:55:27.029572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.797 qpair failed and we were unable to recover it. 
00:28:19.797 [2024-11-19 10:55:27.029675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.797 [2024-11-19 10:55:27.029706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.797 qpair failed and we were unable to recover it. 00:28:19.797 [2024-11-19 10:55:27.029828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.797 [2024-11-19 10:55:27.029860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.797 qpair failed and we were unable to recover it. 00:28:19.798 [2024-11-19 10:55:27.029983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.798 [2024-11-19 10:55:27.030015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.798 qpair failed and we were unable to recover it. 00:28:19.798 [2024-11-19 10:55:27.030147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.798 [2024-11-19 10:55:27.030178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.798 qpair failed and we were unable to recover it. 00:28:19.798 [2024-11-19 10:55:27.030356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.798 [2024-11-19 10:55:27.030387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.798 qpair failed and we were unable to recover it. 00:28:19.798 [2024-11-19 10:55:27.030576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.798 [2024-11-19 10:55:27.030606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.798 qpair failed and we were unable to recover it. 00:28:19.798 [2024-11-19 10:55:27.030713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.798 [2024-11-19 10:55:27.030744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.798 qpair failed and we were unable to recover it. 00:28:19.798 [2024-11-19 10:55:27.030856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.798 [2024-11-19 10:55:27.030888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.798 qpair failed and we were unable to recover it. 00:28:19.798 [2024-11-19 10:55:27.031093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.798 [2024-11-19 10:55:27.031126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.798 qpair failed and we were unable to recover it. 00:28:19.798 [2024-11-19 10:55:27.031319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.798 [2024-11-19 10:55:27.031350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.798 qpair failed and we were unable to recover it. 
00:28:19.798 [2024-11-19 10:55:27.031486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.798 [2024-11-19 10:55:27.031517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.798 qpair failed and we were unable to recover it. 00:28:19.798 [2024-11-19 10:55:27.031755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.798 [2024-11-19 10:55:27.031787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.798 qpair failed and we were unable to recover it. 00:28:19.798 [2024-11-19 10:55:27.031978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.798 [2024-11-19 10:55:27.032012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.798 qpair failed and we were unable to recover it. 00:28:19.798 [2024-11-19 10:55:27.032144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.798 [2024-11-19 10:55:27.032176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.798 qpair failed and we were unable to recover it. 00:28:19.798 [2024-11-19 10:55:27.032346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.798 [2024-11-19 10:55:27.032378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.798 qpair failed and we were unable to recover it. 00:28:19.798 [2024-11-19 10:55:27.032493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.798 [2024-11-19 10:55:27.032524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.798 qpair failed and we were unable to recover it. 00:28:19.798 [2024-11-19 10:55:27.032736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.798 [2024-11-19 10:55:27.032768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.798 qpair failed and we were unable to recover it. 00:28:19.798 [2024-11-19 10:55:27.032899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.798 [2024-11-19 10:55:27.032930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.798 qpair failed and we were unable to recover it. 00:28:19.798 [2024-11-19 10:55:27.033125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.798 [2024-11-19 10:55:27.033157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.798 qpair failed and we were unable to recover it. 00:28:19.798 [2024-11-19 10:55:27.033278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.798 [2024-11-19 10:55:27.033310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.798 qpair failed and we were unable to recover it. 
00:28:19.798 [2024-11-19 10:55:27.033446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.798 [2024-11-19 10:55:27.033479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.798 qpair failed and we were unable to recover it. 00:28:19.798 [2024-11-19 10:55:27.033612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.798 [2024-11-19 10:55:27.033644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.798 qpair failed and we were unable to recover it. 00:28:19.798 [2024-11-19 10:55:27.033762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.798 [2024-11-19 10:55:27.033794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:19.798 qpair failed and we were unable to recover it. 00:28:19.798 [2024-11-19 10:55:27.034017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.798 [2024-11-19 10:55:27.034089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.798 qpair failed and we were unable to recover it. 00:28:19.798 [2024-11-19 10:55:27.034320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.798 [2024-11-19 10:55:27.034355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.798 qpair failed and we were unable to recover it. 00:28:19.798 [2024-11-19 10:55:27.034469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.798 [2024-11-19 10:55:27.034501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.798 qpair failed and we were unable to recover it. 00:28:19.798 [2024-11-19 10:55:27.034654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.798 [2024-11-19 10:55:27.034686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.798 qpair failed and we were unable to recover it. 00:28:19.798 [2024-11-19 10:55:27.034968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.798 [2024-11-19 10:55:27.035005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.798 qpair failed and we were unable to recover it. 00:28:19.798 [2024-11-19 10:55:27.035229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.798 [2024-11-19 10:55:27.035261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.798 qpair failed and we were unable to recover it. 00:28:19.798 [2024-11-19 10:55:27.035384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.798 [2024-11-19 10:55:27.035416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.798 qpair failed and we were unable to recover it. 
00:28:19.798 [2024-11-19 10:55:27.035595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.798 [2024-11-19 10:55:27.035628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.798 qpair failed and we were unable to recover it. 00:28:19.798 [2024-11-19 10:55:27.035810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.798 [2024-11-19 10:55:27.035841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.798 qpair failed and we were unable to recover it. 00:28:19.798 [2024-11-19 10:55:27.035966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.798 [2024-11-19 10:55:27.036000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.798 qpair failed and we were unable to recover it. 00:28:19.798 [2024-11-19 10:55:27.036187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.798 [2024-11-19 10:55:27.036218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.798 qpair failed and we were unable to recover it. 00:28:19.798 [2024-11-19 10:55:27.036333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.798 [2024-11-19 10:55:27.036365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.798 qpair failed and we were unable to recover it. 00:28:19.798 [2024-11-19 10:55:27.036607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.798 [2024-11-19 10:55:27.036638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.798 qpair failed and we were unable to recover it. 00:28:19.799 [2024-11-19 10:55:27.036864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.799 [2024-11-19 10:55:27.036897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.799 qpair failed and we were unable to recover it. 00:28:19.799 [2024-11-19 10:55:27.037040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.799 [2024-11-19 10:55:27.037073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.799 qpair failed and we were unable to recover it. 00:28:19.799 [2024-11-19 10:55:27.037209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.799 [2024-11-19 10:55:27.037241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.799 qpair failed and we were unable to recover it. 00:28:19.799 [2024-11-19 10:55:27.037364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.799 [2024-11-19 10:55:27.037395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.799 qpair failed and we were unable to recover it. 
00:28:19.799 [2024-11-19 10:55:27.037501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.799 [2024-11-19 10:55:27.037534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.799 qpair failed and we were unable to recover it. 00:28:19.799 [2024-11-19 10:55:27.037712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.799 [2024-11-19 10:55:27.037743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.799 qpair failed and we were unable to recover it. 00:28:19.799 [2024-11-19 10:55:27.037857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.799 [2024-11-19 10:55:27.037890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.799 qpair failed and we were unable to recover it. 00:28:19.799 [2024-11-19 10:55:27.038014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.799 [2024-11-19 10:55:27.038062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.799 qpair failed and we were unable to recover it. 00:28:19.799 [2024-11-19 10:55:27.038177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.799 [2024-11-19 10:55:27.038209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.799 qpair failed and we were unable to recover it. 00:28:19.799 [2024-11-19 10:55:27.038327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.799 [2024-11-19 10:55:27.038357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.799 qpair failed and we were unable to recover it. 00:28:19.799 [2024-11-19 10:55:27.038491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.799 [2024-11-19 10:55:27.038523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.799 qpair failed and we were unable to recover it. 00:28:19.799 [2024-11-19 10:55:27.038693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.799 [2024-11-19 10:55:27.038724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.799 qpair failed and we were unable to recover it. 00:28:19.799 [2024-11-19 10:55:27.038832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.799 [2024-11-19 10:55:27.038864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.799 qpair failed and we were unable to recover it. 00:28:19.799 [2024-11-19 10:55:27.038977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.799 [2024-11-19 10:55:27.039011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.799 qpair failed and we were unable to recover it. 
00:28:19.799 [2024-11-19 10:55:27.039275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.799 [2024-11-19 10:55:27.039314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.799 qpair failed and we were unable to recover it. 00:28:19.799 [2024-11-19 10:55:27.039495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.799 [2024-11-19 10:55:27.039526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.799 qpair failed and we were unable to recover it. 00:28:19.799 [2024-11-19 10:55:27.039633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.799 [2024-11-19 10:55:27.039664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.799 qpair failed and we were unable to recover it. 00:28:19.799 [2024-11-19 10:55:27.039848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.799 [2024-11-19 10:55:27.039879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.799 qpair failed and we were unable to recover it. 00:28:19.799 [2024-11-19 10:55:27.040006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.799 [2024-11-19 10:55:27.040038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.799 qpair failed and we were unable to recover it. 00:28:19.799 [2024-11-19 10:55:27.040151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.799 [2024-11-19 10:55:27.040182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.799 qpair failed and we were unable to recover it. 00:28:19.799 [2024-11-19 10:55:27.040305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.799 [2024-11-19 10:55:27.040336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.799 qpair failed and we were unable to recover it. 00:28:19.799 [2024-11-19 10:55:27.040513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.799 [2024-11-19 10:55:27.040545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.799 qpair failed and we were unable to recover it. 00:28:19.799 [2024-11-19 10:55:27.040667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.799 [2024-11-19 10:55:27.040700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.799 qpair failed and we were unable to recover it. 00:28:19.799 [2024-11-19 10:55:27.040902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.799 [2024-11-19 10:55:27.040934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.799 qpair failed and we were unable to recover it. 
00:28:19.799 [2024-11-19 10:55:27.041130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.799 [2024-11-19 10:55:27.041162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.799 qpair failed and we were unable to recover it. 00:28:19.799 [2024-11-19 10:55:27.041406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.799 [2024-11-19 10:55:27.041437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.799 qpair failed and we were unable to recover it. 00:28:19.799 [2024-11-19 10:55:27.041648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.799 [2024-11-19 10:55:27.041680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.799 qpair failed and we were unable to recover it. 00:28:19.799 [2024-11-19 10:55:27.041889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.799 [2024-11-19 10:55:27.041920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.799 qpair failed and we were unable to recover it. 00:28:19.799 [2024-11-19 10:55:27.042122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.799 [2024-11-19 10:55:27.042155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.799 qpair failed and we were unable to recover it. 00:28:19.799 [2024-11-19 10:55:27.042276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.799 [2024-11-19 10:55:27.042307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.799 qpair failed and we were unable to recover it. 00:28:19.799 [2024-11-19 10:55:27.042427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.799 [2024-11-19 10:55:27.042457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.799 qpair failed and we were unable to recover it. 00:28:19.799 [2024-11-19 10:55:27.042577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.799 [2024-11-19 10:55:27.042609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.799 qpair failed and we were unable to recover it. 00:28:19.799 [2024-11-19 10:55:27.042791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.799 [2024-11-19 10:55:27.042822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.799 qpair failed and we were unable to recover it. 00:28:19.799 [2024-11-19 10:55:27.043009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.799 [2024-11-19 10:55:27.043042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.799 qpair failed and we were unable to recover it. 
00:28:19.799 [2024-11-19 10:55:27.043235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.799 [2024-11-19 10:55:27.043267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.799 qpair failed and we were unable to recover it. 00:28:19.799 [2024-11-19 10:55:27.043433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.799 [2024-11-19 10:55:27.043463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.799 qpair failed and we were unable to recover it. 00:28:19.799 [2024-11-19 10:55:27.043637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.799 [2024-11-19 10:55:27.043669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.799 qpair failed and we were unable to recover it. 00:28:19.799 [2024-11-19 10:55:27.043873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.799 [2024-11-19 10:55:27.043904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.799 qpair failed and we were unable to recover it. 00:28:19.800 [2024-11-19 10:55:27.044056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.800 [2024-11-19 10:55:27.044089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.800 qpair failed and we were unable to recover it. 00:28:19.800 [2024-11-19 10:55:27.044213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.800 [2024-11-19 10:55:27.044245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.800 qpair failed and we were unable to recover it. 00:28:19.800 [2024-11-19 10:55:27.044417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.800 [2024-11-19 10:55:27.044449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.800 qpair failed and we were unable to recover it. 00:28:19.800 [2024-11-19 10:55:27.044571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.800 [2024-11-19 10:55:27.044609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.800 qpair failed and we were unable to recover it. 00:28:19.800 [2024-11-19 10:55:27.044789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.800 [2024-11-19 10:55:27.044823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.800 qpair failed and we were unable to recover it. 00:28:19.800 [2024-11-19 10:55:27.044961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.800 [2024-11-19 10:55:27.044994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.800 qpair failed and we were unable to recover it. 
00:28:19.800 [2024-11-19 10:55:27.045177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.800 [2024-11-19 10:55:27.045208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.800 qpair failed and we were unable to recover it. 00:28:19.800 [2024-11-19 10:55:27.045328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.800 [2024-11-19 10:55:27.045359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.800 qpair failed and we were unable to recover it. 00:28:19.800 [2024-11-19 10:55:27.045543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.800 [2024-11-19 10:55:27.045575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.800 qpair failed and we were unable to recover it. 00:28:19.800 [2024-11-19 10:55:27.045685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.800 [2024-11-19 10:55:27.045716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.800 qpair failed and we were unable to recover it. 00:28:19.800 [2024-11-19 10:55:27.045903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.800 [2024-11-19 10:55:27.045935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.800 qpair failed and we were unable to recover it. 00:28:19.800 [2024-11-19 10:55:27.046134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.800 [2024-11-19 10:55:27.046167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.800 qpair failed and we were unable to recover it. 00:28:19.800 [2024-11-19 10:55:27.046284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.800 [2024-11-19 10:55:27.046315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.800 qpair failed and we were unable to recover it. 00:28:19.800 [2024-11-19 10:55:27.046421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.800 [2024-11-19 10:55:27.046454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.800 qpair failed and we were unable to recover it. 00:28:19.800 [2024-11-19 10:55:27.046589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.800 [2024-11-19 10:55:27.046623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.800 qpair failed and we were unable to recover it. 00:28:19.800 [2024-11-19 10:55:27.046844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.800 [2024-11-19 10:55:27.046876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.800 qpair failed and we were unable to recover it. 
00:28:19.800 [2024-11-19 10:55:27.047008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.800 [2024-11-19 10:55:27.047041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.800 qpair failed and we were unable to recover it. 00:28:19.800 [2024-11-19 10:55:27.047243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.800 [2024-11-19 10:55:27.047275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.800 qpair failed and we were unable to recover it. 00:28:19.800 [2024-11-19 10:55:27.047384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.800 [2024-11-19 10:55:27.047415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.800 qpair failed and we were unable to recover it. 00:28:19.800 [2024-11-19 10:55:27.047552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.800 [2024-11-19 10:55:27.047583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.800 qpair failed and we were unable to recover it. 00:28:19.800 [2024-11-19 10:55:27.047702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.800 [2024-11-19 10:55:27.047734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.800 qpair failed and we were unable to recover it. 00:28:19.800 [2024-11-19 10:55:27.047858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.800 [2024-11-19 10:55:27.047888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.800 qpair failed and we were unable to recover it. 00:28:19.800 [2024-11-19 10:55:27.048096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.800 [2024-11-19 10:55:27.048129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.800 qpair failed and we were unable to recover it. 00:28:19.800 [2024-11-19 10:55:27.048371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.800 [2024-11-19 10:55:27.048404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.800 qpair failed and we were unable to recover it. 00:28:19.800 [2024-11-19 10:55:27.048524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.800 [2024-11-19 10:55:27.048556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.800 qpair failed and we were unable to recover it. 00:28:19.800 [2024-11-19 10:55:27.048679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.800 [2024-11-19 10:55:27.048711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.800 qpair failed and we were unable to recover it. 
00:28:19.800 [2024-11-19 10:55:27.048883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.800 [2024-11-19 10:55:27.048916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.800 qpair failed and we were unable to recover it.
[... the same three messages repeat verbatim for tqpair=0x22c8ba0 through 2024-11-19 10:55:27.051814 ...]
00:28:19.801 [2024-11-19 10:55:27.052012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.801 [2024-11-19 10:55:27.052083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:19.801 qpair failed and we were unable to recover it.
[... the same three messages repeat verbatim for tqpair=0x7f6dd4000b90 through 2024-11-19 10:55:27.058657 ...]
00:28:19.802 [2024-11-19 10:55:27.058879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.802 [2024-11-19 10:55:27.058962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.802 qpair failed and we were unable to recover it.
[... the same three messages repeat verbatim for tqpair=0x7f6dd8000b90 through 2024-11-19 10:55:27.061484 ...]
00:28:19.802 [2024-11-19 10:55:27.061762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.802 [2024-11-19 10:55:27.061798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.802 qpair failed and we were unable to recover it.
[... the same three messages repeat verbatim for tqpair=0x22c8ba0 through 2024-11-19 10:55:27.066002 ...]
00:28:19.802 [2024-11-19 10:55:27.066125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.802 [2024-11-19 10:55:27.066161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:19.802 qpair failed and we were unable to recover it.
[... the same three messages repeat verbatim for tqpair=0x7f6dd4000b90 through 2024-11-19 10:55:27.090811 ...]
00:28:19.806 [2024-11-19 10:55:27.090930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.806 [2024-11-19 10:55:27.090992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.806 qpair failed and we were unable to recover it. 00:28:19.806 [2024-11-19 10:55:27.091102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.806 [2024-11-19 10:55:27.091134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.806 qpair failed and we were unable to recover it. 00:28:19.806 [2024-11-19 10:55:27.091248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.806 [2024-11-19 10:55:27.091280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.806 qpair failed and we were unable to recover it. 00:28:19.806 [2024-11-19 10:55:27.091447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.806 [2024-11-19 10:55:27.091479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.806 qpair failed and we were unable to recover it. 00:28:19.806 [2024-11-19 10:55:27.091673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.806 [2024-11-19 10:55:27.091705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.806 qpair failed and we were unable to recover it. 00:28:19.806 [2024-11-19 10:55:27.091900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.806 [2024-11-19 10:55:27.091932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.806 qpair failed and we were unable to recover it. 00:28:19.806 [2024-11-19 10:55:27.092116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.806 [2024-11-19 10:55:27.092149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.806 qpair failed and we were unable to recover it. 00:28:19.806 [2024-11-19 10:55:27.092319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.806 [2024-11-19 10:55:27.092350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.806 qpair failed and we were unable to recover it. 00:28:19.806 [2024-11-19 10:55:27.092477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.806 [2024-11-19 10:55:27.092508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.806 qpair failed and we were unable to recover it. 00:28:19.806 [2024-11-19 10:55:27.092650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.806 [2024-11-19 10:55:27.092682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.806 qpair failed and we were unable to recover it. 
00:28:19.806 [2024-11-19 10:55:27.092801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.806 [2024-11-19 10:55:27.092832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.806 qpair failed and we were unable to recover it. 00:28:19.806 [2024-11-19 10:55:27.093014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.806 [2024-11-19 10:55:27.093047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.806 qpair failed and we were unable to recover it. 00:28:19.806 [2024-11-19 10:55:27.093175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.806 [2024-11-19 10:55:27.093206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.806 qpair failed and we were unable to recover it. 00:28:19.806 [2024-11-19 10:55:27.093345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.806 [2024-11-19 10:55:27.093376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.806 qpair failed and we were unable to recover it. 00:28:19.806 [2024-11-19 10:55:27.093634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.806 [2024-11-19 10:55:27.093665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.806 qpair failed and we were unable to recover it. 00:28:19.806 [2024-11-19 10:55:27.093784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.806 [2024-11-19 10:55:27.093816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.806 qpair failed and we were unable to recover it. 00:28:19.806 [2024-11-19 10:55:27.093961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.806 [2024-11-19 10:55:27.093994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.806 qpair failed and we were unable to recover it. 00:28:19.806 [2024-11-19 10:55:27.094117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.806 [2024-11-19 10:55:27.094148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.806 qpair failed and we were unable to recover it. 00:28:19.806 [2024-11-19 10:55:27.094317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.806 [2024-11-19 10:55:27.094348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.806 qpair failed and we were unable to recover it. 00:28:19.806 [2024-11-19 10:55:27.094482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.806 [2024-11-19 10:55:27.094514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.806 qpair failed and we were unable to recover it. 
00:28:19.806 [2024-11-19 10:55:27.094632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.806 [2024-11-19 10:55:27.094663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.806 qpair failed and we were unable to recover it. 00:28:19.806 [2024-11-19 10:55:27.094779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.806 [2024-11-19 10:55:27.094811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.806 qpair failed and we were unable to recover it. 00:28:19.806 [2024-11-19 10:55:27.094928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.806 [2024-11-19 10:55:27.094970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.806 qpair failed and we were unable to recover it. 00:28:19.806 [2024-11-19 10:55:27.095157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.806 [2024-11-19 10:55:27.095188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.806 qpair failed and we were unable to recover it. 00:28:19.806 [2024-11-19 10:55:27.095301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.806 [2024-11-19 10:55:27.095331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.807 qpair failed and we were unable to recover it. 00:28:19.807 [2024-11-19 10:55:27.095454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.807 [2024-11-19 10:55:27.095485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.807 qpair failed and we were unable to recover it. 00:28:19.807 [2024-11-19 10:55:27.095655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.807 [2024-11-19 10:55:27.095685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.807 qpair failed and we were unable to recover it. 00:28:19.807 [2024-11-19 10:55:27.095886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.807 [2024-11-19 10:55:27.095918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:19.807 qpair failed and we were unable to recover it. 00:28:19.807 [2024-11-19 10:55:27.096113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.807 [2024-11-19 10:55:27.096185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.807 qpair failed and we were unable to recover it. 00:28:19.807 [2024-11-19 10:55:27.096400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.807 [2024-11-19 10:55:27.096436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.807 qpair failed and we were unable to recover it. 
00:28:19.807 [2024-11-19 10:55:27.096555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.807 [2024-11-19 10:55:27.096587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.807 qpair failed and we were unable to recover it. 00:28:19.807 [2024-11-19 10:55:27.096731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.807 [2024-11-19 10:55:27.096764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.807 qpair failed and we were unable to recover it. 00:28:19.807 [2024-11-19 10:55:27.096886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.807 [2024-11-19 10:55:27.096917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.807 qpair failed and we were unable to recover it. 00:28:19.807 [2024-11-19 10:55:27.097146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.807 [2024-11-19 10:55:27.097179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.807 qpair failed and we were unable to recover it. 00:28:19.807 [2024-11-19 10:55:27.097286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.807 [2024-11-19 10:55:27.097319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.807 qpair failed and we were unable to recover it. 00:28:19.807 [2024-11-19 10:55:27.097562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.807 [2024-11-19 10:55:27.097602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.807 qpair failed and we were unable to recover it. 00:28:19.807 [2024-11-19 10:55:27.097729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.807 [2024-11-19 10:55:27.097761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.807 qpair failed and we were unable to recover it. 00:28:19.807 [2024-11-19 10:55:27.097892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.807 [2024-11-19 10:55:27.097924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.807 qpair failed and we were unable to recover it. 00:28:19.807 [2024-11-19 10:55:27.098121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.807 [2024-11-19 10:55:27.098154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.807 qpair failed and we were unable to recover it. 00:28:19.807 [2024-11-19 10:55:27.098389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.807 [2024-11-19 10:55:27.098421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.807 qpair failed and we were unable to recover it. 
00:28:19.807 [2024-11-19 10:55:27.098539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.807 [2024-11-19 10:55:27.098572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.807 qpair failed and we were unable to recover it. 00:28:19.807 [2024-11-19 10:55:27.098857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.807 [2024-11-19 10:55:27.098888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.807 qpair failed and we were unable to recover it. 00:28:19.807 [2024-11-19 10:55:27.099092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.807 [2024-11-19 10:55:27.099126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.807 qpair failed and we were unable to recover it. 00:28:19.807 [2024-11-19 10:55:27.099233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.807 [2024-11-19 10:55:27.099264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.807 qpair failed and we were unable to recover it. 00:28:19.807 [2024-11-19 10:55:27.099399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.807 [2024-11-19 10:55:27.099431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.807 qpair failed and we were unable to recover it. 00:28:19.807 [2024-11-19 10:55:27.099605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.807 [2024-11-19 10:55:27.099637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.807 qpair failed and we were unable to recover it. 00:28:19.807 [2024-11-19 10:55:27.099877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.807 [2024-11-19 10:55:27.099909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.807 qpair failed and we were unable to recover it. 00:28:19.807 [2024-11-19 10:55:27.100094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.807 [2024-11-19 10:55:27.100126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.807 qpair failed and we were unable to recover it. 00:28:19.807 [2024-11-19 10:55:27.100259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.807 [2024-11-19 10:55:27.100292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.807 qpair failed and we were unable to recover it. 00:28:19.807 [2024-11-19 10:55:27.100413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.807 [2024-11-19 10:55:27.100446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.807 qpair failed and we were unable to recover it. 
00:28:19.807 [2024-11-19 10:55:27.100650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.807 [2024-11-19 10:55:27.100682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.807 qpair failed and we were unable to recover it. 00:28:19.807 [2024-11-19 10:55:27.100794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.807 [2024-11-19 10:55:27.100825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.807 qpair failed and we were unable to recover it. 00:28:19.807 [2024-11-19 10:55:27.101009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.807 [2024-11-19 10:55:27.101043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.807 qpair failed and we were unable to recover it. 00:28:19.807 [2024-11-19 10:55:27.101220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.807 [2024-11-19 10:55:27.101252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.807 qpair failed and we were unable to recover it. 00:28:19.807 [2024-11-19 10:55:27.101441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.807 [2024-11-19 10:55:27.101473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.807 qpair failed and we were unable to recover it. 00:28:19.807 [2024-11-19 10:55:27.101644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.807 [2024-11-19 10:55:27.101675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.807 qpair failed and we were unable to recover it. 00:28:19.807 [2024-11-19 10:55:27.101790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.807 [2024-11-19 10:55:27.101822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.807 qpair failed and we were unable to recover it. 00:28:19.807 [2024-11-19 10:55:27.102008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.807 [2024-11-19 10:55:27.102042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.807 qpair failed and we were unable to recover it. 00:28:19.807 [2024-11-19 10:55:27.102242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.807 [2024-11-19 10:55:27.102273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.807 qpair failed and we were unable to recover it. 00:28:19.808 [2024-11-19 10:55:27.102387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.808 [2024-11-19 10:55:27.102419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.808 qpair failed and we were unable to recover it. 
00:28:19.808 [2024-11-19 10:55:27.102661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.808 [2024-11-19 10:55:27.102691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.808 qpair failed and we were unable to recover it. 00:28:19.808 [2024-11-19 10:55:27.102817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.808 [2024-11-19 10:55:27.102848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.808 qpair failed and we were unable to recover it. 00:28:19.808 [2024-11-19 10:55:27.102965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.808 [2024-11-19 10:55:27.102998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.808 qpair failed and we were unable to recover it. 00:28:19.808 [2024-11-19 10:55:27.103129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.808 [2024-11-19 10:55:27.103161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.808 qpair failed and we were unable to recover it. 00:28:19.808 [2024-11-19 10:55:27.103338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.808 [2024-11-19 10:55:27.103369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.808 qpair failed and we were unable to recover it. 00:28:19.808 [2024-11-19 10:55:27.103598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.808 [2024-11-19 10:55:27.103630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.808 qpair failed and we were unable to recover it. 00:28:19.808 [2024-11-19 10:55:27.103746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.808 [2024-11-19 10:55:27.103778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.808 qpair failed and we were unable to recover it. 00:28:19.808 [2024-11-19 10:55:27.103892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.808 [2024-11-19 10:55:27.103923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.808 qpair failed and we were unable to recover it. 00:28:19.808 [2024-11-19 10:55:27.104125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.808 [2024-11-19 10:55:27.104157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.808 qpair failed and we were unable to recover it. 00:28:19.808 [2024-11-19 10:55:27.104298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.808 [2024-11-19 10:55:27.104329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.808 qpair failed and we were unable to recover it. 
00:28:19.808 [2024-11-19 10:55:27.104530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.808 [2024-11-19 10:55:27.104561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.808 qpair failed and we were unable to recover it. 00:28:19.808 [2024-11-19 10:55:27.104682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.808 [2024-11-19 10:55:27.104713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.808 qpair failed and we were unable to recover it. 00:28:19.808 [2024-11-19 10:55:27.104841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.808 [2024-11-19 10:55:27.104874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.808 qpair failed and we were unable to recover it. 00:28:19.808 [2024-11-19 10:55:27.105063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.808 [2024-11-19 10:55:27.105096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.808 qpair failed and we were unable to recover it. 00:28:19.808 [2024-11-19 10:55:27.105287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.808 [2024-11-19 10:55:27.105318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.808 qpair failed and we were unable to recover it. 00:28:19.808 [2024-11-19 10:55:27.105498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.808 [2024-11-19 10:55:27.105530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.808 qpair failed and we were unable to recover it. 00:28:19.808 [2024-11-19 10:55:27.105654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.808 [2024-11-19 10:55:27.105685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.808 qpair failed and we were unable to recover it. 00:28:19.808 [2024-11-19 10:55:27.105799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.808 [2024-11-19 10:55:27.105830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.808 qpair failed and we were unable to recover it. 00:28:19.808 [2024-11-19 10:55:27.106015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.808 [2024-11-19 10:55:27.106047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.808 qpair failed and we were unable to recover it. 00:28:19.808 [2024-11-19 10:55:27.106164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.808 [2024-11-19 10:55:27.106196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.808 qpair failed and we were unable to recover it. 
00:28:19.808 [2024-11-19 10:55:27.106305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.808 [2024-11-19 10:55:27.106337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.808 qpair failed and we were unable to recover it. 00:28:19.808 [2024-11-19 10:55:27.106523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.808 [2024-11-19 10:55:27.106553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.808 qpair failed and we were unable to recover it. 00:28:19.808 [2024-11-19 10:55:27.106818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.808 [2024-11-19 10:55:27.106850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.808 qpair failed and we were unable to recover it. 00:28:19.808 [2024-11-19 10:55:27.107023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.808 [2024-11-19 10:55:27.107056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.808 qpair failed and we were unable to recover it. 00:28:19.808 [2024-11-19 10:55:27.107164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.808 [2024-11-19 10:55:27.107194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.808 qpair failed and we were unable to recover it. 00:28:19.808 [2024-11-19 10:55:27.107377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.808 [2024-11-19 10:55:27.107409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.808 qpair failed and we were unable to recover it. 00:28:19.808 [2024-11-19 10:55:27.107522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.808 [2024-11-19 10:55:27.107553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.808 qpair failed and we were unable to recover it. 00:28:19.808 [2024-11-19 10:55:27.107672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.808 [2024-11-19 10:55:27.107703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.808 qpair failed and we were unable to recover it. 00:28:19.808 [2024-11-19 10:55:27.107830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.808 [2024-11-19 10:55:27.107860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.808 qpair failed and we were unable to recover it. 00:28:19.808 [2024-11-19 10:55:27.108028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.808 [2024-11-19 10:55:27.108062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.808 qpair failed and we were unable to recover it. 
00:28:19.808 [2024-11-19 10:55:27.108283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.808 [2024-11-19 10:55:27.108315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.808 qpair failed and we were unable to recover it. 00:28:19.808 [2024-11-19 10:55:27.108434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.808 [2024-11-19 10:55:27.108465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.808 qpair failed and we were unable to recover it. 00:28:19.808 [2024-11-19 10:55:27.108646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.808 [2024-11-19 10:55:27.108677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.808 qpair failed and we were unable to recover it. 00:28:19.808 [2024-11-19 10:55:27.108858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.808 [2024-11-19 10:55:27.108889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.808 qpair failed and we were unable to recover it. 00:28:19.808 [2024-11-19 10:55:27.109149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.808 [2024-11-19 10:55:27.109182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.808 qpair failed and we were unable to recover it. 00:28:19.808 [2024-11-19 10:55:27.109388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.808 [2024-11-19 10:55:27.109420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.808 qpair failed and we were unable to recover it. 00:28:19.808 [2024-11-19 10:55:27.109604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.808 [2024-11-19 10:55:27.109635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.808 qpair failed and we were unable to recover it. 00:28:19.809 [2024-11-19 10:55:27.109850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.809 [2024-11-19 10:55:27.109881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.809 qpair failed and we were unable to recover it. 00:28:19.809 [2024-11-19 10:55:27.110012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.809 [2024-11-19 10:55:27.110046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.809 qpair failed and we were unable to recover it. 00:28:19.809 [2024-11-19 10:55:27.110171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.809 [2024-11-19 10:55:27.110202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.809 qpair failed and we were unable to recover it. 
00:28:19.809 [2024-11-19 10:55:27.110377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.809 [2024-11-19 10:55:27.110409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.809 qpair failed and we were unable to recover it. 00:28:19.809 [2024-11-19 10:55:27.110591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.809 [2024-11-19 10:55:27.110622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.809 qpair failed and we were unable to recover it. 00:28:19.809 [2024-11-19 10:55:27.110725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.809 [2024-11-19 10:55:27.110757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.809 qpair failed and we were unable to recover it. 00:28:19.809 [2024-11-19 10:55:27.110995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.809 [2024-11-19 10:55:27.111035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.809 qpair failed and we were unable to recover it. 00:28:19.809 [2024-11-19 10:55:27.111166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.809 [2024-11-19 10:55:27.111197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.809 qpair failed and we were unable to recover it. 00:28:19.809 [2024-11-19 10:55:27.111377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.809 [2024-11-19 10:55:27.111409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.809 qpair failed and we were unable to recover it. 00:28:19.809 [2024-11-19 10:55:27.111588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.809 [2024-11-19 10:55:27.111619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.809 qpair failed and we were unable to recover it. 00:28:19.809 [2024-11-19 10:55:27.111742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.809 [2024-11-19 10:55:27.111773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.809 qpair failed and we were unable to recover it. 00:28:19.809 [2024-11-19 10:55:27.111969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.809 [2024-11-19 10:55:27.112002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.809 qpair failed and we were unable to recover it. 00:28:19.809 [2024-11-19 10:55:27.112121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.809 [2024-11-19 10:55:27.112153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.809 qpair failed and we were unable to recover it. 
00:28:19.809 [2024-11-19 10:55:27.112254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.809 [2024-11-19 10:55:27.112285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.809 qpair failed and we were unable to recover it. 00:28:19.809 [2024-11-19 10:55:27.112472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.809 [2024-11-19 10:55:27.112502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.809 qpair failed and we were unable to recover it. 00:28:19.809 [2024-11-19 10:55:27.112617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.809 [2024-11-19 10:55:27.112648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.809 qpair failed and we were unable to recover it. 00:28:19.809 [2024-11-19 10:55:27.112821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.809 [2024-11-19 10:55:27.112851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.809 qpair failed and we were unable to recover it. 00:28:19.809 [2024-11-19 10:55:27.112962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.809 [2024-11-19 10:55:27.112994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.809 qpair failed and we were unable to recover it. 00:28:19.809 [2024-11-19 10:55:27.113207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.809 [2024-11-19 10:55:27.113239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.809 qpair failed and we were unable to recover it. 00:28:19.809 [2024-11-19 10:55:27.113348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.809 [2024-11-19 10:55:27.113379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.809 qpair failed and we were unable to recover it. 00:28:19.809 [2024-11-19 10:55:27.113487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.809 [2024-11-19 10:55:27.113519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.809 qpair failed and we were unable to recover it. 00:28:19.809 [2024-11-19 10:55:27.113718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.809 [2024-11-19 10:55:27.113750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.809 qpair failed and we were unable to recover it. 00:28:19.809 [2024-11-19 10:55:27.113934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.809 [2024-11-19 10:55:27.113977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.809 qpair failed and we were unable to recover it. 
00:28:19.809 [2024-11-19 10:55:27.114098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.809 [2024-11-19 10:55:27.114129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.809 qpair failed and we were unable to recover it. 00:28:19.809 [2024-11-19 10:55:27.114259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.809 [2024-11-19 10:55:27.114291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.809 qpair failed and we were unable to recover it. 00:28:19.809 [2024-11-19 10:55:27.114400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.809 [2024-11-19 10:55:27.114430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.809 qpair failed and we were unable to recover it. 00:28:19.809 [2024-11-19 10:55:27.114608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.809 [2024-11-19 10:55:27.114639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.809 qpair failed and we were unable to recover it. 00:28:19.809 [2024-11-19 10:55:27.114746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.809 [2024-11-19 10:55:27.114777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.809 qpair failed and we were unable to recover it. 00:28:19.809 [2024-11-19 10:55:27.114885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.809 [2024-11-19 10:55:27.114916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.809 qpair failed and we were unable to recover it. 00:28:19.809 [2024-11-19 10:55:27.115115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.809 [2024-11-19 10:55:27.115147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.809 qpair failed and we were unable to recover it. 00:28:19.809 [2024-11-19 10:55:27.115339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.809 [2024-11-19 10:55:27.115370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.809 qpair failed and we were unable to recover it. 00:28:19.809 [2024-11-19 10:55:27.115497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.809 [2024-11-19 10:55:27.115529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.809 qpair failed and we were unable to recover it. 00:28:19.809 [2024-11-19 10:55:27.115641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.809 [2024-11-19 10:55:27.115672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.809 qpair failed and we were unable to recover it. 
00:28:19.809 [2024-11-19 10:55:27.115862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.809 [2024-11-19 10:55:27.115899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.809 qpair failed and we were unable to recover it. 00:28:19.809 [2024-11-19 10:55:27.116033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.809 [2024-11-19 10:55:27.116066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.809 qpair failed and we were unable to recover it. 00:28:19.809 [2024-11-19 10:55:27.116187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.809 [2024-11-19 10:55:27.116217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.809 qpair failed and we were unable to recover it. 00:28:19.809 [2024-11-19 10:55:27.116456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.809 [2024-11-19 10:55:27.116486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.809 qpair failed and we were unable to recover it. 00:28:19.809 [2024-11-19 10:55:27.116732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.809 [2024-11-19 10:55:27.116763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.809 qpair failed and we were unable to recover it. 00:28:19.810 [2024-11-19 10:55:27.116880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.810 [2024-11-19 10:55:27.116912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.810 qpair failed and we were unable to recover it. 00:28:19.810 [2024-11-19 10:55:27.117055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.810 [2024-11-19 10:55:27.117087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.810 qpair failed and we were unable to recover it. 00:28:19.810 [2024-11-19 10:55:27.117270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.810 [2024-11-19 10:55:27.117301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.810 qpair failed and we were unable to recover it. 00:28:19.810 [2024-11-19 10:55:27.117474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.810 [2024-11-19 10:55:27.117506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.810 qpair failed and we were unable to recover it. 00:28:19.810 [2024-11-19 10:55:27.117710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.810 [2024-11-19 10:55:27.117740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.810 qpair failed and we were unable to recover it. 
00:28:19.810 [2024-11-19 10:55:27.117852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.810 [2024-11-19 10:55:27.117884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.810 qpair failed and we were unable to recover it. 00:28:19.810 [2024-11-19 10:55:27.117999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.810 [2024-11-19 10:55:27.118033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.810 qpair failed and we were unable to recover it. 00:28:19.810 [2024-11-19 10:55:27.118152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.810 [2024-11-19 10:55:27.118182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.810 qpair failed and we were unable to recover it. 00:28:19.810 [2024-11-19 10:55:27.118300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.810 [2024-11-19 10:55:27.118332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.810 qpair failed and we were unable to recover it. 00:28:19.810 [2024-11-19 10:55:27.118456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.810 [2024-11-19 10:55:27.118487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.810 qpair failed and we were unable to recover it. 00:28:19.810 [2024-11-19 10:55:27.118660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.810 [2024-11-19 10:55:27.118691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.810 qpair failed and we were unable to recover it. 00:28:19.810 [2024-11-19 10:55:27.118829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.810 [2024-11-19 10:55:27.118860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.810 qpair failed and we were unable to recover it. 00:28:19.810 [2024-11-19 10:55:27.119040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.810 [2024-11-19 10:55:27.119073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.810 qpair failed and we were unable to recover it. 00:28:19.810 [2024-11-19 10:55:27.119301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.810 [2024-11-19 10:55:27.119332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.810 qpair failed and we were unable to recover it. 00:28:19.810 [2024-11-19 10:55:27.119513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.810 [2024-11-19 10:55:27.119544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:19.810 qpair failed and we were unable to recover it. 
00:28:19.810 [2024-11-19 10:55:27.119645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.810 [2024-11-19 10:55:27.119677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.810 qpair failed and we were unable to recover it.
00:28:19.810 [2024-11-19 10:55:27.119852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.810 [2024-11-19 10:55:27.119882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.810 qpair failed and we were unable to recover it.
00:28:19.810 [2024-11-19 10:55:27.120019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.810 [2024-11-19 10:55:27.120052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.810 qpair failed and we were unable to recover it.
00:28:19.810 [2024-11-19 10:55:27.120173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.810 [2024-11-19 10:55:27.120204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.810 qpair failed and we were unable to recover it.
00:28:19.810 [2024-11-19 10:55:27.120391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.810 [2024-11-19 10:55:27.120422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.810 qpair failed and we were unable to recover it.
00:28:19.810 [2024-11-19 10:55:27.120544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.810 [2024-11-19 10:55:27.120575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.810 qpair failed and we were unable to recover it.
00:28:19.810 [2024-11-19 10:55:27.120695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.810 [2024-11-19 10:55:27.120727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.810 qpair failed and we were unable to recover it.
00:28:19.810 [2024-11-19 10:55:27.120972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.810 [2024-11-19 10:55:27.121009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.810 qpair failed and we were unable to recover it.
00:28:19.810 [2024-11-19 10:55:27.121125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.810 [2024-11-19 10:55:27.121157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.810 qpair failed and we were unable to recover it.
00:28:19.810 [2024-11-19 10:55:27.121271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.810 [2024-11-19 10:55:27.121302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.810 qpair failed and we were unable to recover it.
00:28:19.810 [2024-11-19 10:55:27.121425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.810 [2024-11-19 10:55:27.121456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.810 qpair failed and we were unable to recover it.
00:28:19.810 [2024-11-19 10:55:27.121591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.810 [2024-11-19 10:55:27.121622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.810 qpair failed and we were unable to recover it.
00:28:19.810 [2024-11-19 10:55:27.121725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.810 [2024-11-19 10:55:27.121755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.810 qpair failed and we were unable to recover it.
00:28:19.810 [2024-11-19 10:55:27.121932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.810 [2024-11-19 10:55:27.121992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.810 qpair failed and we were unable to recover it.
00:28:19.810 [2024-11-19 10:55:27.122114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.810 [2024-11-19 10:55:27.122146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.810 qpair failed and we were unable to recover it.
00:28:19.810 [2024-11-19 10:55:27.122271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.810 [2024-11-19 10:55:27.122302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.810 qpair failed and we were unable to recover it.
00:28:19.810 [2024-11-19 10:55:27.122408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.810 [2024-11-19 10:55:27.122440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.810 qpair failed and we were unable to recover it.
00:28:19.810 [2024-11-19 10:55:27.122607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.810 [2024-11-19 10:55:27.122639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.810 qpair failed and we were unable to recover it.
00:28:19.810 [2024-11-19 10:55:27.122765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.810 [2024-11-19 10:55:27.122797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.810 qpair failed and we were unable to recover it.
00:28:19.810 [2024-11-19 10:55:27.122973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.810 [2024-11-19 10:55:27.123007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.810 qpair failed and we were unable to recover it.
00:28:19.810 [2024-11-19 10:55:27.123111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.810 [2024-11-19 10:55:27.123142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.810 qpair failed and we were unable to recover it.
00:28:19.810 [2024-11-19 10:55:27.123306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.810 [2024-11-19 10:55:27.123376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.810 qpair failed and we were unable to recover it.
00:28:19.810 [2024-11-19 10:55:27.123528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.810 [2024-11-19 10:55:27.123564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.810 qpair failed and we were unable to recover it.
00:28:19.810 [2024-11-19 10:55:27.123703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.810 [2024-11-19 10:55:27.123736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.810 qpair failed and we were unable to recover it.
00:28:19.810 [2024-11-19 10:55:27.123853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.810 [2024-11-19 10:55:27.123885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.810 qpair failed and we were unable to recover it.
00:28:19.810 [2024-11-19 10:55:27.124065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.810 [2024-11-19 10:55:27.124098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.811 qpair failed and we were unable to recover it.
00:28:19.811 [2024-11-19 10:55:27.124279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.811 [2024-11-19 10:55:27.124312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.811 qpair failed and we were unable to recover it.
00:28:19.811 [2024-11-19 10:55:27.124441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.811 [2024-11-19 10:55:27.124472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.811 qpair failed and we were unable to recover it.
00:28:19.811 [2024-11-19 10:55:27.124575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.811 [2024-11-19 10:55:27.124606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.811 qpair failed and we were unable to recover it.
00:28:19.811 [2024-11-19 10:55:27.124791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.811 [2024-11-19 10:55:27.124822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.811 qpair failed and we were unable to recover it.
00:28:19.811 [2024-11-19 10:55:27.125081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.811 [2024-11-19 10:55:27.125114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.811 qpair failed and we were unable to recover it.
00:28:19.811 [2024-11-19 10:55:27.125226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.811 [2024-11-19 10:55:27.125259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.811 qpair failed and we were unable to recover it.
00:28:19.811 [2024-11-19 10:55:27.125379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.811 [2024-11-19 10:55:27.125409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.811 qpair failed and we were unable to recover it.
00:28:19.811 [2024-11-19 10:55:27.125531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.811 [2024-11-19 10:55:27.125563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.811 qpair failed and we were unable to recover it.
00:28:19.811 [2024-11-19 10:55:27.125741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.811 [2024-11-19 10:55:27.125781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.811 qpair failed and we were unable to recover it.
00:28:19.811 [2024-11-19 10:55:27.125976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.811 [2024-11-19 10:55:27.126009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.811 qpair failed and we were unable to recover it.
00:28:19.811 [2024-11-19 10:55:27.126132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.811 [2024-11-19 10:55:27.126163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.811 qpair failed and we were unable to recover it.
00:28:19.811 [2024-11-19 10:55:27.126282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.811 [2024-11-19 10:55:27.126313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.811 qpair failed and we were unable to recover it.
00:28:19.811 [2024-11-19 10:55:27.126433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.811 [2024-11-19 10:55:27.126463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.811 qpair failed and we were unable to recover it.
00:28:19.811 [2024-11-19 10:55:27.126575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.811 [2024-11-19 10:55:27.126607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.811 qpair failed and we were unable to recover it.
00:28:19.811 [2024-11-19 10:55:27.126782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.811 [2024-11-19 10:55:27.126813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.811 qpair failed and we were unable to recover it.
00:28:19.811 [2024-11-19 10:55:27.127055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.811 [2024-11-19 10:55:27.127088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.811 qpair failed and we were unable to recover it.
00:28:19.811 [2024-11-19 10:55:27.127221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.811 [2024-11-19 10:55:27.127252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.811 qpair failed and we were unable to recover it.
00:28:19.811 [2024-11-19 10:55:27.127437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.811 [2024-11-19 10:55:27.127469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.811 qpair failed and we were unable to recover it.
00:28:19.811 [2024-11-19 10:55:27.127586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.811 [2024-11-19 10:55:27.127617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.811 qpair failed and we were unable to recover it.
00:28:19.811 [2024-11-19 10:55:27.127721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.811 [2024-11-19 10:55:27.127753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.811 qpair failed and we were unable to recover it.
00:28:19.811 [2024-11-19 10:55:27.127860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.811 [2024-11-19 10:55:27.127892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.811 qpair failed and we were unable to recover it.
00:28:19.811 [2024-11-19 10:55:27.128078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.811 [2024-11-19 10:55:27.128111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.811 qpair failed and we were unable to recover it.
00:28:19.811 [2024-11-19 10:55:27.128302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.811 [2024-11-19 10:55:27.128334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.811 qpair failed and we were unable to recover it.
00:28:19.811 [2024-11-19 10:55:27.128504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.811 [2024-11-19 10:55:27.128536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.811 qpair failed and we were unable to recover it.
00:28:19.811 [2024-11-19 10:55:27.128655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.811 [2024-11-19 10:55:27.128686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.811 qpair failed and we were unable to recover it.
00:28:19.811 [2024-11-19 10:55:27.128806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.811 [2024-11-19 10:55:27.128837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.811 qpair failed and we were unable to recover it.
00:28:19.811 [2024-11-19 10:55:27.128964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.811 [2024-11-19 10:55:27.128997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.811 qpair failed and we were unable to recover it.
00:28:19.811 [2024-11-19 10:55:27.129109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.811 [2024-11-19 10:55:27.129139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.811 qpair failed and we were unable to recover it.
00:28:19.811 [2024-11-19 10:55:27.129271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.811 [2024-11-19 10:55:27.129303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.811 qpair failed and we were unable to recover it.
00:28:19.811 [2024-11-19 10:55:27.129426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.811 [2024-11-19 10:55:27.129458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.811 qpair failed and we were unable to recover it.
00:28:19.811 [2024-11-19 10:55:27.129581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.811 [2024-11-19 10:55:27.129612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.811 qpair failed and we were unable to recover it.
00:28:19.811 [2024-11-19 10:55:27.129789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.811 [2024-11-19 10:55:27.129821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.811 qpair failed and we were unable to recover it.
00:28:19.811 [2024-11-19 10:55:27.129934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.811 [2024-11-19 10:55:27.129976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.811 qpair failed and we were unable to recover it.
00:28:19.811 [2024-11-19 10:55:27.130276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.811 [2024-11-19 10:55:27.130309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.811 qpair failed and we were unable to recover it.
00:28:19.811 [2024-11-19 10:55:27.130479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.811 [2024-11-19 10:55:27.130510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.811 qpair failed and we were unable to recover it.
00:28:19.811 [2024-11-19 10:55:27.130699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.811 [2024-11-19 10:55:27.130731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.811 qpair failed and we were unable to recover it.
00:28:19.811 [2024-11-19 10:55:27.130996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.811 [2024-11-19 10:55:27.131029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.811 qpair failed and we were unable to recover it.
00:28:19.811 [2024-11-19 10:55:27.131212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.811 [2024-11-19 10:55:27.131243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.811 qpair failed and we were unable to recover it.
00:28:19.811 [2024-11-19 10:55:27.131511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.811 [2024-11-19 10:55:27.131542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.811 qpair failed and we were unable to recover it.
00:28:19.811 [2024-11-19 10:55:27.131666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.811 [2024-11-19 10:55:27.131697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.811 qpair failed and we were unable to recover it.
00:28:19.811 [2024-11-19 10:55:27.131819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.811 [2024-11-19 10:55:27.131850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.812 qpair failed and we were unable to recover it.
00:28:19.812 [2024-11-19 10:55:27.131974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.812 [2024-11-19 10:55:27.132006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.812 qpair failed and we were unable to recover it.
00:28:19.812 [2024-11-19 10:55:27.132263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.812 [2024-11-19 10:55:27.132295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.812 qpair failed and we were unable to recover it.
00:28:19.812 [2024-11-19 10:55:27.132416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.812 [2024-11-19 10:55:27.132448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.812 qpair failed and we were unable to recover it.
00:28:19.812 [2024-11-19 10:55:27.132566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.812 [2024-11-19 10:55:27.132597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.812 qpair failed and we were unable to recover it.
00:28:19.812 [2024-11-19 10:55:27.132768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.812 [2024-11-19 10:55:27.132799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.812 qpair failed and we were unable to recover it.
00:28:19.812 [2024-11-19 10:55:27.132978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.812 [2024-11-19 10:55:27.133011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.812 qpair failed and we were unable to recover it.
00:28:19.812 [2024-11-19 10:55:27.133118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.812 [2024-11-19 10:55:27.133149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.812 qpair failed and we were unable to recover it.
00:28:19.812 [2024-11-19 10:55:27.133334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.812 [2024-11-19 10:55:27.133372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.812 qpair failed and we were unable to recover it.
00:28:19.812 [2024-11-19 10:55:27.133491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.812 [2024-11-19 10:55:27.133522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.812 qpair failed and we were unable to recover it.
00:28:19.812 [2024-11-19 10:55:27.133630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.812 [2024-11-19 10:55:27.133660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.812 qpair failed and we were unable to recover it.
00:28:19.812 [2024-11-19 10:55:27.133786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.812 [2024-11-19 10:55:27.133817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.812 qpair failed and we were unable to recover it.
00:28:19.812 [2024-11-19 10:55:27.133934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.812 [2024-11-19 10:55:27.133977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.812 qpair failed and we were unable to recover it.
00:28:19.812 [2024-11-19 10:55:27.134105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.812 [2024-11-19 10:55:27.134137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.812 qpair failed and we were unable to recover it.
00:28:19.812 [2024-11-19 10:55:27.134257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.812 [2024-11-19 10:55:27.134288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.812 qpair failed and we were unable to recover it.
00:28:19.812 [2024-11-19 10:55:27.134403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.812 [2024-11-19 10:55:27.134434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.812 qpair failed and we were unable to recover it.
00:28:19.812 [2024-11-19 10:55:27.134605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.812 [2024-11-19 10:55:27.134636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.812 qpair failed and we were unable to recover it.
00:28:19.812 [2024-11-19 10:55:27.134769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.812 [2024-11-19 10:55:27.134800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.812 qpair failed and we were unable to recover it.
00:28:19.812 [2024-11-19 10:55:27.134992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.812 [2024-11-19 10:55:27.135025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.812 qpair failed and we were unable to recover it.
00:28:19.812 [2024-11-19 10:55:27.135199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.812 [2024-11-19 10:55:27.135229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.812 qpair failed and we were unable to recover it.
00:28:19.812 [2024-11-19 10:55:27.135341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.812 [2024-11-19 10:55:27.135372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.812 qpair failed and we were unable to recover it.
00:28:19.812 [2024-11-19 10:55:27.135504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.812 [2024-11-19 10:55:27.135536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.812 qpair failed and we were unable to recover it.
00:28:19.812 [2024-11-19 10:55:27.135649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.812 [2024-11-19 10:55:27.135680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.812 qpair failed and we were unable to recover it.
00:28:19.812 [2024-11-19 10:55:27.135865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.812 [2024-11-19 10:55:27.135897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.812 qpair failed and we were unable to recover it.
00:28:19.812 [2024-11-19 10:55:27.136030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.812 [2024-11-19 10:55:27.136063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.812 qpair failed and we were unable to recover it.
00:28:19.812 [2024-11-19 10:55:27.136179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.813 [2024-11-19 10:55:27.136210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.813 qpair failed and we were unable to recover it.
00:28:19.813 [2024-11-19 10:55:27.136319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.813 [2024-11-19 10:55:27.136350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.813 qpair failed and we were unable to recover it.
00:28:19.813 [2024-11-19 10:55:27.136539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.813 [2024-11-19 10:55:27.136570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.813 qpair failed and we were unable to recover it.
00:28:19.813 [2024-11-19 10:55:27.136679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.813 [2024-11-19 10:55:27.136710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.813 qpair failed and we were unable to recover it.
00:28:19.813 [2024-11-19 10:55:27.136888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.813 [2024-11-19 10:55:27.136920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:19.813 qpair failed and we were unable to recover it.
00:28:19.813 [2024-11-19 10:55:27.137057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.813 [2024-11-19 10:55:27.137092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.813 qpair failed and we were unable to recover it.
00:28:19.813 [2024-11-19 10:55:27.137235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.813 [2024-11-19 10:55:27.137268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.813 qpair failed and we were unable to recover it.
00:28:19.813 [2024-11-19 10:55:27.137393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.813 [2024-11-19 10:55:27.137424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.813 qpair failed and we were unable to recover it.
00:28:19.813 [2024-11-19 10:55:27.137536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.813 [2024-11-19 10:55:27.137567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.813 qpair failed and we were unable to recover it.
00:28:19.813 [2024-11-19 10:55:27.137687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.813 [2024-11-19 10:55:27.137719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:19.813 qpair failed and we were unable to recover it.
00:28:19.813 [2024-11-19 10:55:27.137897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.813 [2024-11-19 10:55:27.137984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.813 qpair failed and we were unable to recover it.
00:28:19.813 [2024-11-19 10:55:27.138114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.813 [2024-11-19 10:55:27.138150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.813 qpair failed and we were unable to recover it.
00:28:19.813 [2024-11-19 10:55:27.138394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.813 [2024-11-19 10:55:27.138427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.813 qpair failed and we were unable to recover it.
00:28:19.813 [2024-11-19 10:55:27.138604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.813 [2024-11-19 10:55:27.138637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.813 qpair failed and we were unable to recover it.
00:28:19.813 [2024-11-19 10:55:27.138769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.813 [2024-11-19 10:55:27.138800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.813 qpair failed and we were unable to recover it.
00:28:19.813 [2024-11-19 10:55:27.138920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.813 [2024-11-19 10:55:27.138963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.813 qpair failed and we were unable to recover it.
00:28:19.813 [2024-11-19 10:55:27.139154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.813 [2024-11-19 10:55:27.139187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.813 qpair failed and we were unable to recover it.
00:28:19.813 [2024-11-19 10:55:27.139317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.813 [2024-11-19 10:55:27.139348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.813 qpair failed and we were unable to recover it.
00:28:19.813 [2024-11-19 10:55:27.139452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.813 [2024-11-19 10:55:27.139484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.813 qpair failed and we were unable to recover it.
00:28:19.813 [2024-11-19 10:55:27.139659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.813 [2024-11-19 10:55:27.139691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.813 qpair failed and we were unable to recover it.
00:28:19.813 [2024-11-19 10:55:27.139793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.813 [2024-11-19 10:55:27.139824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.813 qpair failed and we were unable to recover it.
00:28:19.813 [2024-11-19 10:55:27.139937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.813 [2024-11-19 10:55:27.139984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.813 qpair failed and we were unable to recover it.
00:28:19.813 [2024-11-19 10:55:27.140263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.813 [2024-11-19 10:55:27.140294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.813 qpair failed and we were unable to recover it.
00:28:19.813 [2024-11-19 10:55:27.140408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.813 [2024-11-19 10:55:27.140449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.813 qpair failed and we were unable to recover it.
00:28:19.813 [2024-11-19 10:55:27.140651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.813 [2024-11-19 10:55:27.140682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.813 qpair failed and we were unable to recover it.
00:28:19.813 [2024-11-19 10:55:27.140805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.813 [2024-11-19 10:55:27.140837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.813 qpair failed and we were unable to recover it.
00:28:19.813 [2024-11-19 10:55:27.141031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.813 [2024-11-19 10:55:27.141064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.813 qpair failed and we were unable to recover it.
00:28:19.813 [2024-11-19 10:55:27.141179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.813 [2024-11-19 10:55:27.141211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.813 qpair failed and we were unable to recover it.
00:28:19.813 [2024-11-19 10:55:27.141343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.813 [2024-11-19 10:55:27.141374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.813 qpair failed and we were unable to recover it.
00:28:19.813 [2024-11-19 10:55:27.141497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.813 [2024-11-19 10:55:27.141528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.813 qpair failed and we were unable to recover it.
00:28:19.813 [2024-11-19 10:55:27.141638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.813 [2024-11-19 10:55:27.141668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.813 qpair failed and we were unable to recover it.
00:28:19.813 [2024-11-19 10:55:27.141842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.813 [2024-11-19 10:55:27.141873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.813 qpair failed and we were unable to recover it.
00:28:19.813 [2024-11-19 10:55:27.141998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.813 [2024-11-19 10:55:27.142030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.813 qpair failed and we were unable to recover it.
00:28:19.813 [2024-11-19 10:55:27.142149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.813 [2024-11-19 10:55:27.142182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.813 qpair failed and we were unable to recover it.
00:28:19.813 [2024-11-19 10:55:27.142293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.813 [2024-11-19 10:55:27.142323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.813 qpair failed and we were unable to recover it.
00:28:19.813 [2024-11-19 10:55:27.142581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.813 [2024-11-19 10:55:27.142612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.813 qpair failed and we were unable to recover it.
00:28:19.813 [2024-11-19 10:55:27.142738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.814 [2024-11-19 10:55:27.142770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.814 qpair failed and we were unable to recover it.
00:28:19.814 [2024-11-19 10:55:27.142894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.814 [2024-11-19 10:55:27.142926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.814 qpair failed and we were unable to recover it.
00:28:19.814 [2024-11-19 10:55:27.143059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.814 [2024-11-19 10:55:27.143090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.814 qpair failed and we were unable to recover it.
00:28:19.814 [2024-11-19 10:55:27.143208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.814 [2024-11-19 10:55:27.143240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.814 qpair failed and we were unable to recover it.
00:28:19.814 [2024-11-19 10:55:27.143356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.814 [2024-11-19 10:55:27.143387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.814 qpair failed and we were unable to recover it.
00:28:19.814 [2024-11-19 10:55:27.143508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.814 [2024-11-19 10:55:27.143540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.814 qpair failed and we were unable to recover it.
00:28:19.814 [2024-11-19 10:55:27.143646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.814 [2024-11-19 10:55:27.143677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.814 qpair failed and we were unable to recover it.
00:28:19.814 [2024-11-19 10:55:27.143936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.814 [2024-11-19 10:55:27.143991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.814 qpair failed and we were unable to recover it.
00:28:19.814 [2024-11-19 10:55:27.144164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.814 [2024-11-19 10:55:27.144195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.814 qpair failed and we were unable to recover it.
00:28:19.814 [2024-11-19 10:55:27.144305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.814 [2024-11-19 10:55:27.144337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.814 qpair failed and we were unable to recover it.
00:28:19.814 [2024-11-19 10:55:27.144528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.814 [2024-11-19 10:55:27.144558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.814 qpair failed and we were unable to recover it.
00:28:19.814 [2024-11-19 10:55:27.144679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.814 [2024-11-19 10:55:27.144711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.814 qpair failed and we were unable to recover it.
00:28:19.814 [2024-11-19 10:55:27.144845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.814 [2024-11-19 10:55:27.144877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.814 qpair failed and we were unable to recover it.
00:28:19.814 [2024-11-19 10:55:27.145073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.814 [2024-11-19 10:55:27.145107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.814 qpair failed and we were unable to recover it.
00:28:19.814 [2024-11-19 10:55:27.145281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.814 [2024-11-19 10:55:27.145320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.814 qpair failed and we were unable to recover it.
00:28:19.814 [2024-11-19 10:55:27.145432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.814 [2024-11-19 10:55:27.145462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.814 qpair failed and we were unable to recover it.
00:28:19.814 [2024-11-19 10:55:27.145586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.814 [2024-11-19 10:55:27.145617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.814 qpair failed and we were unable to recover it.
00:28:19.814 [2024-11-19 10:55:27.145732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.814 [2024-11-19 10:55:27.145763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.814 qpair failed and we were unable to recover it.
00:28:19.814 [2024-11-19 10:55:27.145899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.814 [2024-11-19 10:55:27.145930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.814 qpair failed and we were unable to recover it.
00:28:19.814 [2024-11-19 10:55:27.146115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.814 [2024-11-19 10:55:27.146148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.814 qpair failed and we were unable to recover it.
00:28:19.814 [2024-11-19 10:55:27.146269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.814 [2024-11-19 10:55:27.146301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.814 qpair failed and we were unable to recover it.
00:28:19.814 [2024-11-19 10:55:27.146425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.814 [2024-11-19 10:55:27.146456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.814 qpair failed and we were unable to recover it.
00:28:19.814 [2024-11-19 10:55:27.146574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.814 [2024-11-19 10:55:27.146606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.814 qpair failed and we were unable to recover it.
00:28:19.814 [2024-11-19 10:55:27.146709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.814 [2024-11-19 10:55:27.146740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.814 qpair failed and we were unable to recover it.
00:28:19.814 [2024-11-19 10:55:27.146915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.814 [2024-11-19 10:55:27.146955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.814 qpair failed and we were unable to recover it.
00:28:19.814 [2024-11-19 10:55:27.147141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.814 [2024-11-19 10:55:27.147172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.814 qpair failed and we were unable to recover it.
00:28:19.814 [2024-11-19 10:55:27.147343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.814 [2024-11-19 10:55:27.147380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.814 qpair failed and we were unable to recover it.
00:28:19.814 [2024-11-19 10:55:27.147586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.814 [2024-11-19 10:55:27.147618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.814 qpair failed and we were unable to recover it.
00:28:19.814 [2024-11-19 10:55:27.147746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.814 [2024-11-19 10:55:27.147778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.814 qpair failed and we were unable to recover it.
00:28:19.814 [2024-11-19 10:55:27.147970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.814 [2024-11-19 10:55:27.148004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.814 qpair failed and we were unable to recover it.
00:28:19.814 [2024-11-19 10:55:27.148191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.814 [2024-11-19 10:55:27.148223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.814 qpair failed and we were unable to recover it.
00:28:19.814 [2024-11-19 10:55:27.148328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.814 [2024-11-19 10:55:27.148360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.814 qpair failed and we were unable to recover it.
00:28:19.814 [2024-11-19 10:55:27.148491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.814 [2024-11-19 10:55:27.148523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.814 qpair failed and we were unable to recover it.
00:28:19.814 [2024-11-19 10:55:27.148637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.814 [2024-11-19 10:55:27.148669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.814 qpair failed and we were unable to recover it.
00:28:19.814 [2024-11-19 10:55:27.148851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.814 [2024-11-19 10:55:27.148883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.814 qpair failed and we were unable to recover it.
00:28:19.814 [2024-11-19 10:55:27.148988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.814 [2024-11-19 10:55:27.149022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.814 qpair failed and we were unable to recover it.
00:28:19.814 [2024-11-19 10:55:27.149138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.814 [2024-11-19 10:55:27.149168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.815 qpair failed and we were unable to recover it.
00:28:19.815 [2024-11-19 10:55:27.149277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.815 [2024-11-19 10:55:27.149310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.815 qpair failed and we were unable to recover it.
00:28:19.815 [2024-11-19 10:55:27.149501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.815 [2024-11-19 10:55:27.149533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.815 qpair failed and we were unable to recover it.
00:28:19.815 [2024-11-19 10:55:27.149656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.815 [2024-11-19 10:55:27.149687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.815 qpair failed and we were unable to recover it.
00:28:19.815 [2024-11-19 10:55:27.149805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.815 [2024-11-19 10:55:27.149837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.815 qpair failed and we were unable to recover it.
00:28:19.815 [2024-11-19 10:55:27.149973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.815 [2024-11-19 10:55:27.150007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.815 qpair failed and we were unable to recover it.
00:28:19.815 [2024-11-19 10:55:27.150214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:19.815 [2024-11-19 10:55:27.150246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:19.815 qpair failed and we were unable to recover it.
00:28:19.815 [2024-11-19 10:55:27.150354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.815 [2024-11-19 10:55:27.150384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.815 qpair failed and we were unable to recover it. 00:28:19.815 [2024-11-19 10:55:27.150492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.815 [2024-11-19 10:55:27.150524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.815 qpair failed and we were unable to recover it. 00:28:19.815 [2024-11-19 10:55:27.150697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.815 [2024-11-19 10:55:27.150728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.815 qpair failed and we were unable to recover it. 00:28:19.815 [2024-11-19 10:55:27.150919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.815 [2024-11-19 10:55:27.150957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.815 qpair failed and we were unable to recover it. 00:28:19.815 [2024-11-19 10:55:27.151083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.815 [2024-11-19 10:55:27.151114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.815 qpair failed and we were unable to recover it. 00:28:19.815 [2024-11-19 10:55:27.151247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.815 [2024-11-19 10:55:27.151279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.815 qpair failed and we were unable to recover it. 00:28:19.815 [2024-11-19 10:55:27.151387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.815 [2024-11-19 10:55:27.151418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.815 qpair failed and we were unable to recover it. 00:28:19.815 [2024-11-19 10:55:27.151517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.815 [2024-11-19 10:55:27.151549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.815 qpair failed and we were unable to recover it. 00:28:19.815 [2024-11-19 10:55:27.151667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.815 [2024-11-19 10:55:27.151698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.815 qpair failed and we were unable to recover it. 00:28:19.815 [2024-11-19 10:55:27.151807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.815 [2024-11-19 10:55:27.151840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.815 qpair failed and we were unable to recover it. 
00:28:19.815 [2024-11-19 10:55:27.151966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.815 [2024-11-19 10:55:27.152000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.815 qpair failed and we were unable to recover it. 00:28:19.815 [2024-11-19 10:55:27.152111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.815 [2024-11-19 10:55:27.152150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.815 qpair failed and we were unable to recover it. 00:28:19.815 [2024-11-19 10:55:27.152272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.815 [2024-11-19 10:55:27.152303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.815 qpair failed and we were unable to recover it. 00:28:19.815 [2024-11-19 10:55:27.152567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.815 [2024-11-19 10:55:27.152600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.815 qpair failed and we were unable to recover it. 00:28:19.815 [2024-11-19 10:55:27.152715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.815 [2024-11-19 10:55:27.152746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.815 qpair failed and we were unable to recover it. 00:28:19.815 [2024-11-19 10:55:27.152852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.815 [2024-11-19 10:55:27.152884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.815 qpair failed and we were unable to recover it. 00:28:19.815 [2024-11-19 10:55:27.153008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.815 [2024-11-19 10:55:27.153042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.815 qpair failed and we were unable to recover it. 00:28:19.815 [2024-11-19 10:55:27.153244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.815 [2024-11-19 10:55:27.153276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.815 qpair failed and we were unable to recover it. 00:28:19.815 [2024-11-19 10:55:27.153385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.815 [2024-11-19 10:55:27.153417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.815 qpair failed and we were unable to recover it. 00:28:19.815 [2024-11-19 10:55:27.153550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.815 [2024-11-19 10:55:27.153581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.815 qpair failed and we were unable to recover it. 
00:28:19.815 [2024-11-19 10:55:27.153698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.815 [2024-11-19 10:55:27.153731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.815 qpair failed and we were unable to recover it. 00:28:19.815 [2024-11-19 10:55:27.153969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.815 [2024-11-19 10:55:27.154001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.815 qpair failed and we were unable to recover it. 00:28:19.815 [2024-11-19 10:55:27.154123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.815 [2024-11-19 10:55:27.154156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.815 qpair failed and we were unable to recover it. 00:28:19.815 [2024-11-19 10:55:27.154276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.815 [2024-11-19 10:55:27.154308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.815 qpair failed and we were unable to recover it. 00:28:19.815 [2024-11-19 10:55:27.154512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.815 [2024-11-19 10:55:27.154544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.815 qpair failed and we were unable to recover it. 00:28:19.815 [2024-11-19 10:55:27.154691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.815 [2024-11-19 10:55:27.154723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.815 qpair failed and we were unable to recover it. 00:28:19.815 [2024-11-19 10:55:27.154893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.815 [2024-11-19 10:55:27.154926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.815 qpair failed and we were unable to recover it. 00:28:19.815 [2024-11-19 10:55:27.155045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.815 [2024-11-19 10:55:27.155076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.815 qpair failed and we were unable to recover it. 00:28:19.815 [2024-11-19 10:55:27.155267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.815 [2024-11-19 10:55:27.155299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.815 qpair failed and we were unable to recover it. 00:28:19.815 [2024-11-19 10:55:27.155572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.815 [2024-11-19 10:55:27.155605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.815 qpair failed and we were unable to recover it. 
00:28:19.815 [2024-11-19 10:55:27.155743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.816 [2024-11-19 10:55:27.155775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.816 qpair failed and we were unable to recover it. 00:28:19.816 [2024-11-19 10:55:27.155905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.816 [2024-11-19 10:55:27.155936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.816 qpair failed and we were unable to recover it. 00:28:19.816 [2024-11-19 10:55:27.156052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.816 [2024-11-19 10:55:27.156084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.816 qpair failed and we were unable to recover it. 00:28:19.816 [2024-11-19 10:55:27.156220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.816 [2024-11-19 10:55:27.156252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.816 qpair failed and we were unable to recover it. 00:28:19.816 [2024-11-19 10:55:27.156380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.816 [2024-11-19 10:55:27.156412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.816 qpair failed and we were unable to recover it. 00:28:19.816 [2024-11-19 10:55:27.156601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.816 [2024-11-19 10:55:27.156632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.816 qpair failed and we were unable to recover it. 00:28:19.816 [2024-11-19 10:55:27.156816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.816 [2024-11-19 10:55:27.156847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.816 qpair failed and we were unable to recover it. 00:28:19.816 [2024-11-19 10:55:27.156959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.816 [2024-11-19 10:55:27.156993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.816 qpair failed and we were unable to recover it. 00:28:19.816 [2024-11-19 10:55:27.157128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.816 [2024-11-19 10:55:27.157160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.816 qpair failed and we were unable to recover it. 00:28:19.816 [2024-11-19 10:55:27.157336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.816 [2024-11-19 10:55:27.157368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.816 qpair failed and we were unable to recover it. 
00:28:19.816 [2024-11-19 10:55:27.157498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.816 [2024-11-19 10:55:27.157529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.816 qpair failed and we were unable to recover it. 00:28:19.816 [2024-11-19 10:55:27.157657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.816 [2024-11-19 10:55:27.157688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.816 qpair failed and we were unable to recover it. 00:28:19.816 [2024-11-19 10:55:27.157858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.816 [2024-11-19 10:55:27.157889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.816 qpair failed and we were unable to recover it. 00:28:19.816 [2024-11-19 10:55:27.158072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.816 [2024-11-19 10:55:27.158106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.816 qpair failed and we were unable to recover it. 00:28:19.816 [2024-11-19 10:55:27.158237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.816 [2024-11-19 10:55:27.158269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.816 qpair failed and we were unable to recover it. 00:28:19.816 [2024-11-19 10:55:27.158377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.816 [2024-11-19 10:55:27.158409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.816 qpair failed and we were unable to recover it. 00:28:19.816 [2024-11-19 10:55:27.158525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.816 [2024-11-19 10:55:27.158557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.816 qpair failed and we were unable to recover it. 00:28:19.816 [2024-11-19 10:55:27.158758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.816 [2024-11-19 10:55:27.158789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.816 qpair failed and we were unable to recover it. 00:28:19.816 [2024-11-19 10:55:27.158915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.816 [2024-11-19 10:55:27.158958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.816 qpair failed and we were unable to recover it. 00:28:19.816 [2024-11-19 10:55:27.159064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.816 [2024-11-19 10:55:27.159097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.816 qpair failed and we were unable to recover it. 
00:28:19.816 [2024-11-19 10:55:27.159268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.816 [2024-11-19 10:55:27.159300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.816 qpair failed and we were unable to recover it. 00:28:19.816 [2024-11-19 10:55:27.159405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.816 [2024-11-19 10:55:27.159442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.816 qpair failed and we were unable to recover it. 00:28:19.816 [2024-11-19 10:55:27.159556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.816 [2024-11-19 10:55:27.159587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.816 qpair failed and we were unable to recover it. 00:28:19.816 [2024-11-19 10:55:27.159699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.816 [2024-11-19 10:55:27.159731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.816 qpair failed and we were unable to recover it. 00:28:19.816 [2024-11-19 10:55:27.159835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.816 [2024-11-19 10:55:27.159867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.816 qpair failed and we were unable to recover it. 00:28:19.816 [2024-11-19 10:55:27.160007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.816 [2024-11-19 10:55:27.160044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.816 qpair failed and we were unable to recover it. 00:28:19.816 [2024-11-19 10:55:27.160226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.816 [2024-11-19 10:55:27.160257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.816 qpair failed and we were unable to recover it. 00:28:19.816 [2024-11-19 10:55:27.160366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.816 [2024-11-19 10:55:27.160398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.816 qpair failed and we were unable to recover it. 00:28:19.816 [2024-11-19 10:55:27.160523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.816 [2024-11-19 10:55:27.160554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.816 qpair failed and we were unable to recover it. 00:28:19.816 [2024-11-19 10:55:27.160733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.816 [2024-11-19 10:55:27.160764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.816 qpair failed and we were unable to recover it. 
00:28:19.816 [2024-11-19 10:55:27.160957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.816 [2024-11-19 10:55:27.160990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.816 qpair failed and we were unable to recover it. 00:28:19.816 [2024-11-19 10:55:27.161178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.816 [2024-11-19 10:55:27.161210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.816 qpair failed and we were unable to recover it. 00:28:19.816 [2024-11-19 10:55:27.161317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.816 [2024-11-19 10:55:27.161348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.816 qpair failed and we were unable to recover it. 00:28:19.816 [2024-11-19 10:55:27.161455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.816 [2024-11-19 10:55:27.161487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.816 qpair failed and we were unable to recover it. 00:28:19.816 [2024-11-19 10:55:27.161589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.816 [2024-11-19 10:55:27.161621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.816 qpair failed and we were unable to recover it. 00:28:19.816 [2024-11-19 10:55:27.161753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.816 [2024-11-19 10:55:27.161785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.816 qpair failed and we were unable to recover it. 00:28:19.816 [2024-11-19 10:55:27.161896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.816 [2024-11-19 10:55:27.161927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.817 qpair failed and we were unable to recover it. 00:28:19.817 [2024-11-19 10:55:27.162125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.817 [2024-11-19 10:55:27.162159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.817 qpair failed and we were unable to recover it. 00:28:19.817 [2024-11-19 10:55:27.162327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.817 [2024-11-19 10:55:27.162358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.817 qpair failed and we were unable to recover it. 00:28:19.817 [2024-11-19 10:55:27.162475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.817 [2024-11-19 10:55:27.162507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.817 qpair failed and we were unable to recover it. 
00:28:19.817 [2024-11-19 10:55:27.162621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.817 [2024-11-19 10:55:27.162653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.817 qpair failed and we were unable to recover it. 00:28:19.817 [2024-11-19 10:55:27.162765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.817 [2024-11-19 10:55:27.162796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.817 qpair failed and we were unable to recover it. 00:28:19.817 [2024-11-19 10:55:27.163003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.817 [2024-11-19 10:55:27.163036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.817 qpair failed and we were unable to recover it. 00:28:19.817 [2024-11-19 10:55:27.163138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.817 [2024-11-19 10:55:27.163170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.817 qpair failed and we were unable to recover it. 00:28:19.817 [2024-11-19 10:55:27.163342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.817 [2024-11-19 10:55:27.163373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.817 qpair failed and we were unable to recover it. 00:28:19.817 [2024-11-19 10:55:27.163481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.817 [2024-11-19 10:55:27.163513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.817 qpair failed and we were unable to recover it. 00:28:19.817 [2024-11-19 10:55:27.163630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.817 [2024-11-19 10:55:27.163662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.817 qpair failed and we were unable to recover it. 00:28:19.817 [2024-11-19 10:55:27.163857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.817 [2024-11-19 10:55:27.163889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.817 qpair failed and we were unable to recover it. 00:28:19.817 [2024-11-19 10:55:27.164041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.817 [2024-11-19 10:55:27.164074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.817 qpair failed and we were unable to recover it. 00:28:19.817 [2024-11-19 10:55:27.164192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.817 [2024-11-19 10:55:27.164224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.817 qpair failed and we were unable to recover it. 
00:28:19.817 [2024-11-19 10:55:27.164347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.817 [2024-11-19 10:55:27.164379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.817 qpair failed and we were unable to recover it. 00:28:19.817 [2024-11-19 10:55:27.164489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.817 [2024-11-19 10:55:27.164521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.817 qpair failed and we were unable to recover it. 00:28:19.817 [2024-11-19 10:55:27.164641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.817 [2024-11-19 10:55:27.164673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.817 qpair failed and we were unable to recover it. 00:28:19.817 [2024-11-19 10:55:27.164859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.817 [2024-11-19 10:55:27.164891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.817 qpair failed and we were unable to recover it. 00:28:19.817 [2024-11-19 10:55:27.165026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.817 [2024-11-19 10:55:27.165058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.817 qpair failed and we were unable to recover it. 00:28:19.817 [2024-11-19 10:55:27.165179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.817 [2024-11-19 10:55:27.165212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.817 qpair failed and we were unable to recover it. 00:28:19.817 [2024-11-19 10:55:27.165352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.817 [2024-11-19 10:55:27.165384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.817 qpair failed and we were unable to recover it. 00:28:19.817 [2024-11-19 10:55:27.165491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.817 [2024-11-19 10:55:27.165522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.817 qpair failed and we were unable to recover it. 00:28:19.817 [2024-11-19 10:55:27.165626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.817 [2024-11-19 10:55:27.165658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.817 qpair failed and we were unable to recover it. 00:28:19.817 [2024-11-19 10:55:27.165767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.817 [2024-11-19 10:55:27.165799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.817 qpair failed and we were unable to recover it. 
00:28:19.817 [2024-11-19 10:55:27.165916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.817 [2024-11-19 10:55:27.165958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.817 qpair failed and we were unable to recover it. 00:28:19.817 [2024-11-19 10:55:27.166066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.817 [2024-11-19 10:55:27.166104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.817 qpair failed and we were unable to recover it. 00:28:19.817 [2024-11-19 10:55:27.166288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.817 [2024-11-19 10:55:27.166320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.817 qpair failed and we were unable to recover it. 00:28:19.817 [2024-11-19 10:55:27.166440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.817 [2024-11-19 10:55:27.166471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.817 qpair failed and we were unable to recover it. 00:28:19.817 [2024-11-19 10:55:27.166653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.817 [2024-11-19 10:55:27.166685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.817 qpair failed and we were unable to recover it. 00:28:19.817 [2024-11-19 10:55:27.166790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.817 [2024-11-19 10:55:27.166822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.817 qpair failed and we were unable to recover it. 00:28:19.817 [2024-11-19 10:55:27.166963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.817 [2024-11-19 10:55:27.166996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.817 qpair failed and we were unable to recover it. 00:28:19.818 [2024-11-19 10:55:27.167120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.818 [2024-11-19 10:55:27.167151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.818 qpair failed and we were unable to recover it. 00:28:19.818 [2024-11-19 10:55:27.167265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.818 [2024-11-19 10:55:27.167297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.818 qpair failed and we were unable to recover it. 00:28:19.818 [2024-11-19 10:55:27.167427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.818 [2024-11-19 10:55:27.167459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.818 qpair failed and we were unable to recover it. 
00:28:19.818 [2024-11-19 10:55:27.167578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.818 [2024-11-19 10:55:27.167609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.818 qpair failed and we were unable to recover it. 00:28:19.818 [2024-11-19 10:55:27.167722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.818 [2024-11-19 10:55:27.167753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.818 qpair failed and we were unable to recover it. 00:28:19.818 [2024-11-19 10:55:27.167938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.818 [2024-11-19 10:55:27.167996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.818 qpair failed and we were unable to recover it. 00:28:19.818 [2024-11-19 10:55:27.168104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.818 [2024-11-19 10:55:27.168132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.818 qpair failed and we were unable to recover it. 00:28:19.818 [2024-11-19 10:55:27.168226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.818 [2024-11-19 10:55:27.168254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.818 qpair failed and we were unable to recover it. 00:28:19.818 [2024-11-19 10:55:27.168383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.818 [2024-11-19 10:55:27.168411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.818 qpair failed and we were unable to recover it. 00:28:19.818 [2024-11-19 10:55:27.168524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.818 [2024-11-19 10:55:27.168552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.818 qpair failed and we were unable to recover it. 00:28:19.818 [2024-11-19 10:55:27.168726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.818 [2024-11-19 10:55:27.168754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.818 qpair failed and we were unable to recover it. 00:28:19.818 [2024-11-19 10:55:27.168922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.818 [2024-11-19 10:55:27.168959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.818 qpair failed and we were unable to recover it. 00:28:19.818 [2024-11-19 10:55:27.169163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.818 [2024-11-19 10:55:27.169191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.818 qpair failed and we were unable to recover it. 
00:28:19.818 [2024-11-19 10:55:27.169373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.818 [2024-11-19 10:55:27.169400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.818 qpair failed and we were unable to recover it. 00:28:19.818 [2024-11-19 10:55:27.169582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.818 [2024-11-19 10:55:27.169610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.818 qpair failed and we were unable to recover it. 00:28:19.818 [2024-11-19 10:55:27.169796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.818 [2024-11-19 10:55:27.169824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.818 qpair failed and we were unable to recover it. 00:28:19.818 [2024-11-19 10:55:27.170021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.818 [2024-11-19 10:55:27.170051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.818 qpair failed and we were unable to recover it. 00:28:19.818 [2024-11-19 10:55:27.170226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.818 [2024-11-19 10:55:27.170255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.818 qpair failed and we were unable to recover it. 00:28:19.818 [2024-11-19 10:55:27.170432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.818 [2024-11-19 10:55:27.170460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.818 qpair failed and we were unable to recover it. 00:28:19.818 [2024-11-19 10:55:27.170638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.818 [2024-11-19 10:55:27.170666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.818 qpair failed and we were unable to recover it. 00:28:19.818 [2024-11-19 10:55:27.170870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.818 [2024-11-19 10:55:27.170899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.818 qpair failed and we were unable to recover it. 00:28:19.818 [2024-11-19 10:55:27.171040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.818 [2024-11-19 10:55:27.171070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.818 qpair failed and we were unable to recover it. 00:28:19.818 [2024-11-19 10:55:27.171167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.818 [2024-11-19 10:55:27.171195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.818 qpair failed and we were unable to recover it. 
00:28:19.818 [2024-11-19 10:55:27.171391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.818 [2024-11-19 10:55:27.171420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.818 qpair failed and we were unable to recover it. 00:28:19.818 [2024-11-19 10:55:27.171598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.818 [2024-11-19 10:55:27.171629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.818 qpair failed and we were unable to recover it. 00:28:19.818 [2024-11-19 10:55:27.171847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.818 [2024-11-19 10:55:27.171877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.818 qpair failed and we were unable to recover it. 00:28:19.818 [2024-11-19 10:55:27.171994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.818 [2024-11-19 10:55:27.172026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.818 qpair failed and we were unable to recover it. 00:28:19.818 [2024-11-19 10:55:27.172221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.818 [2024-11-19 10:55:27.172250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.818 qpair failed and we were unable to recover it. 00:28:19.818 [2024-11-19 10:55:27.172365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.818 [2024-11-19 10:55:27.172393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.818 qpair failed and we were unable to recover it. 00:28:19.818 [2024-11-19 10:55:27.172500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.818 [2024-11-19 10:55:27.172528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.818 qpair failed and we were unable to recover it. 00:28:19.818 [2024-11-19 10:55:27.172633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.818 [2024-11-19 10:55:27.172662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.818 qpair failed and we were unable to recover it. 00:28:19.818 [2024-11-19 10:55:27.172837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.818 [2024-11-19 10:55:27.172865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.818 qpair failed and we were unable to recover it. 00:28:19.818 [2024-11-19 10:55:27.172998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.818 [2024-11-19 10:55:27.173028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.818 qpair failed and we were unable to recover it. 
00:28:19.819 [2024-11-19 10:55:27.173154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.819 [2024-11-19 10:55:27.173183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.819 qpair failed and we were unable to recover it. 00:28:19.819 [2024-11-19 10:55:27.173296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.819 [2024-11-19 10:55:27.173329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.819 qpair failed and we were unable to recover it. 00:28:19.819 [2024-11-19 10:55:27.173496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.819 [2024-11-19 10:55:27.173524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.819 qpair failed and we were unable to recover it. 00:28:19.819 [2024-11-19 10:55:27.173779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.819 [2024-11-19 10:55:27.173807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.819 qpair failed and we were unable to recover it. 00:28:19.819 [2024-11-19 10:55:27.174053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.819 [2024-11-19 10:55:27.174083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.819 qpair failed and we were unable to recover it. 00:28:19.819 [2024-11-19 10:55:27.174250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.819 [2024-11-19 10:55:27.174280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.819 qpair failed and we were unable to recover it. 00:28:19.819 [2024-11-19 10:55:27.174537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.819 [2024-11-19 10:55:27.174564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.819 qpair failed and we were unable to recover it. 00:28:19.819 [2024-11-19 10:55:27.174756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.819 [2024-11-19 10:55:27.174784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.819 qpair failed and we were unable to recover it. 00:28:19.819 [2024-11-19 10:55:27.175016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.819 [2024-11-19 10:55:27.175046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.819 qpair failed and we were unable to recover it. 00:28:19.819 [2024-11-19 10:55:27.175166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.819 [2024-11-19 10:55:27.175195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.819 qpair failed and we were unable to recover it. 
00:28:19.819 [2024-11-19 10:55:27.175302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.819 [2024-11-19 10:55:27.175331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.819 qpair failed and we were unable to recover it. 00:28:19.819 [2024-11-19 10:55:27.175508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.819 [2024-11-19 10:55:27.175537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.819 qpair failed and we were unable to recover it. 00:28:19.819 [2024-11-19 10:55:27.175730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.819 [2024-11-19 10:55:27.175761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.819 qpair failed and we were unable to recover it. 00:28:19.819 [2024-11-19 10:55:27.175887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.819 [2024-11-19 10:55:27.175918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.819 qpair failed and we were unable to recover it. 00:28:19.819 [2024-11-19 10:55:27.176103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.819 [2024-11-19 10:55:27.176135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.819 qpair failed and we were unable to recover it. 00:28:19.819 [2024-11-19 10:55:27.176281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.819 [2024-11-19 10:55:27.176312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.819 qpair failed and we were unable to recover it. 00:28:19.819 [2024-11-19 10:55:27.176481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.819 [2024-11-19 10:55:27.176512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.819 qpair failed and we were unable to recover it. 00:28:19.819 [2024-11-19 10:55:27.176703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.819 [2024-11-19 10:55:27.176734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.819 qpair failed and we were unable to recover it. 00:28:19.819 [2024-11-19 10:55:27.176848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.819 [2024-11-19 10:55:27.176879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.819 qpair failed and we were unable to recover it. 00:28:19.819 [2024-11-19 10:55:27.177070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.819 [2024-11-19 10:55:27.177103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.819 qpair failed and we were unable to recover it. 
00:28:19.819 [2024-11-19 10:55:27.177295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.819 [2024-11-19 10:55:27.177327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:19.819 qpair failed and we were unable to recover it.
00:28:20.106 [repeated entries condensed: the three-line error above recurs continuously from 10:55:27.177295 through 10:55:27.220451; every connect() attempt to 10.0.0.2 port 4420 fails with errno = 111, mostly on tqpair=0x7f6de0000b90, briefly on tqpair=0x7f6dd8000b90 (10:55:27.196498 to 10:55:27.203915), then on tqpair=0x7f6de0000b90 again, and each attempt ends with "qpair failed and we were unable to recover it."]
00:28:20.106 [2024-11-19 10:55:27.220592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.107 [2024-11-19 10:55:27.220624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.107 qpair failed and we were unable to recover it. 00:28:20.107 [2024-11-19 10:55:27.220812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.107 [2024-11-19 10:55:27.220845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.107 qpair failed and we were unable to recover it. 00:28:20.107 [2024-11-19 10:55:27.221021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.107 [2024-11-19 10:55:27.221053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.107 qpair failed and we were unable to recover it. 00:28:20.107 [2024-11-19 10:55:27.221182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.107 [2024-11-19 10:55:27.221214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.107 qpair failed and we were unable to recover it. 00:28:20.107 [2024-11-19 10:55:27.221400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.107 [2024-11-19 10:55:27.221431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.107 qpair failed and we were unable to recover it. 00:28:20.107 [2024-11-19 10:55:27.221622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.107 [2024-11-19 10:55:27.221653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.107 qpair failed and we were unable to recover it. 00:28:20.107 [2024-11-19 10:55:27.221773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.107 [2024-11-19 10:55:27.221805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.107 qpair failed and we were unable to recover it. 00:28:20.107 [2024-11-19 10:55:27.221978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.107 [2024-11-19 10:55:27.222011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.107 qpair failed and we were unable to recover it. 00:28:20.107 [2024-11-19 10:55:27.222205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.107 [2024-11-19 10:55:27.222237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.107 qpair failed and we were unable to recover it. 00:28:20.107 [2024-11-19 10:55:27.222351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.107 [2024-11-19 10:55:27.222384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.107 qpair failed and we were unable to recover it. 
00:28:20.107 [2024-11-19 10:55:27.222506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.107 [2024-11-19 10:55:27.222537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.107 qpair failed and we were unable to recover it. 00:28:20.107 [2024-11-19 10:55:27.222711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.107 [2024-11-19 10:55:27.222749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.107 qpair failed and we were unable to recover it. 00:28:20.107 [2024-11-19 10:55:27.222966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.107 [2024-11-19 10:55:27.222999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.107 qpair failed and we were unable to recover it. 00:28:20.107 [2024-11-19 10:55:27.223119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.107 [2024-11-19 10:55:27.223150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.107 qpair failed and we were unable to recover it. 00:28:20.107 [2024-11-19 10:55:27.223337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.107 [2024-11-19 10:55:27.223370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.107 qpair failed and we were unable to recover it. 00:28:20.107 [2024-11-19 10:55:27.223540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.107 [2024-11-19 10:55:27.223570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.107 qpair failed and we were unable to recover it. 00:28:20.107 [2024-11-19 10:55:27.223828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.107 [2024-11-19 10:55:27.223860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.107 qpair failed and we were unable to recover it. 00:28:20.107 [2024-11-19 10:55:27.224055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.107 [2024-11-19 10:55:27.224087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.107 qpair failed and we were unable to recover it. 00:28:20.107 [2024-11-19 10:55:27.224274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.107 [2024-11-19 10:55:27.224306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.107 qpair failed and we were unable to recover it. 00:28:20.107 [2024-11-19 10:55:27.224495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.107 [2024-11-19 10:55:27.224527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.107 qpair failed and we were unable to recover it. 
00:28:20.107 [2024-11-19 10:55:27.224754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.107 [2024-11-19 10:55:27.224786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.107 qpair failed and we were unable to recover it. 00:28:20.107 [2024-11-19 10:55:27.224907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.107 [2024-11-19 10:55:27.224938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.107 qpair failed and we were unable to recover it. 00:28:20.107 [2024-11-19 10:55:27.225074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.107 [2024-11-19 10:55:27.225105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.107 qpair failed and we were unable to recover it. 00:28:20.107 [2024-11-19 10:55:27.225210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.107 [2024-11-19 10:55:27.225242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.107 qpair failed and we were unable to recover it. 00:28:20.107 [2024-11-19 10:55:27.225419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.107 [2024-11-19 10:55:27.225449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.107 qpair failed and we were unable to recover it. 00:28:20.107 [2024-11-19 10:55:27.225685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.107 [2024-11-19 10:55:27.225717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.107 qpair failed and we were unable to recover it. 00:28:20.107 [2024-11-19 10:55:27.225994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.107 [2024-11-19 10:55:27.226028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.107 qpair failed and we were unable to recover it. 00:28:20.107 [2024-11-19 10:55:27.226216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.107 [2024-11-19 10:55:27.226248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.107 qpair failed and we were unable to recover it. 00:28:20.107 [2024-11-19 10:55:27.226364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.107 [2024-11-19 10:55:27.226395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.107 qpair failed and we were unable to recover it. 00:28:20.107 [2024-11-19 10:55:27.226590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.107 [2024-11-19 10:55:27.226622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.107 qpair failed and we were unable to recover it. 
00:28:20.107 [2024-11-19 10:55:27.226797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.107 [2024-11-19 10:55:27.226828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.107 qpair failed and we were unable to recover it. 00:28:20.107 [2024-11-19 10:55:27.227067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.107 [2024-11-19 10:55:27.227099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.107 qpair failed and we were unable to recover it. 00:28:20.107 [2024-11-19 10:55:27.227363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.107 [2024-11-19 10:55:27.227396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.107 qpair failed and we were unable to recover it. 00:28:20.108 [2024-11-19 10:55:27.227632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.108 [2024-11-19 10:55:27.227664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.108 qpair failed and we were unable to recover it. 00:28:20.108 [2024-11-19 10:55:27.227857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.108 [2024-11-19 10:55:27.227888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.108 qpair failed and we were unable to recover it. 00:28:20.108 [2024-11-19 10:55:27.228084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.108 [2024-11-19 10:55:27.228117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.108 qpair failed and we were unable to recover it. 00:28:20.108 [2024-11-19 10:55:27.228303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.108 [2024-11-19 10:55:27.228336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.108 qpair failed and we were unable to recover it. 00:28:20.108 [2024-11-19 10:55:27.228466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.108 [2024-11-19 10:55:27.228498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.108 qpair failed and we were unable to recover it. 00:28:20.108 [2024-11-19 10:55:27.228687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.108 [2024-11-19 10:55:27.228719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.108 qpair failed and we were unable to recover it. 00:28:20.108 [2024-11-19 10:55:27.228967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.108 [2024-11-19 10:55:27.229000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.108 qpair failed and we were unable to recover it. 
00:28:20.108 [2024-11-19 10:55:27.229116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.108 [2024-11-19 10:55:27.229148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.108 qpair failed and we were unable to recover it. 00:28:20.108 [2024-11-19 10:55:27.229387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.108 [2024-11-19 10:55:27.229418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.108 qpair failed and we were unable to recover it. 00:28:20.108 [2024-11-19 10:55:27.229614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.108 [2024-11-19 10:55:27.229647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.108 qpair failed and we were unable to recover it. 00:28:20.108 [2024-11-19 10:55:27.229898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.108 [2024-11-19 10:55:27.229929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.108 qpair failed and we were unable to recover it. 00:28:20.108 [2024-11-19 10:55:27.230148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.108 [2024-11-19 10:55:27.230181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.108 qpair failed and we were unable to recover it. 00:28:20.108 [2024-11-19 10:55:27.230285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.108 [2024-11-19 10:55:27.230317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.108 qpair failed and we were unable to recover it. 00:28:20.108 [2024-11-19 10:55:27.230486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.108 [2024-11-19 10:55:27.230518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.108 qpair failed and we were unable to recover it. 00:28:20.108 [2024-11-19 10:55:27.230767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.108 [2024-11-19 10:55:27.230798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.108 qpair failed and we were unable to recover it. 00:28:20.108 [2024-11-19 10:55:27.230971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.108 [2024-11-19 10:55:27.231004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.108 qpair failed and we were unable to recover it. 00:28:20.108 [2024-11-19 10:55:27.231172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.108 [2024-11-19 10:55:27.231204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.108 qpair failed and we were unable to recover it. 
00:28:20.108 [2024-11-19 10:55:27.231389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.108 [2024-11-19 10:55:27.231420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.108 qpair failed and we were unable to recover it. 00:28:20.108 [2024-11-19 10:55:27.231538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.108 [2024-11-19 10:55:27.231576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.108 qpair failed and we were unable to recover it. 00:28:20.108 [2024-11-19 10:55:27.231759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.108 [2024-11-19 10:55:27.231790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.108 qpair failed and we were unable to recover it. 00:28:20.108 [2024-11-19 10:55:27.231977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.108 [2024-11-19 10:55:27.232009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.108 qpair failed and we were unable to recover it. 00:28:20.108 [2024-11-19 10:55:27.232180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.108 [2024-11-19 10:55:27.232212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.108 qpair failed and we were unable to recover it. 00:28:20.108 [2024-11-19 10:55:27.232418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.108 [2024-11-19 10:55:27.232450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.108 qpair failed and we were unable to recover it. 00:28:20.108 [2024-11-19 10:55:27.232573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.108 [2024-11-19 10:55:27.232604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.108 qpair failed and we were unable to recover it. 00:28:20.108 [2024-11-19 10:55:27.232885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.108 [2024-11-19 10:55:27.232916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.108 qpair failed and we were unable to recover it. 00:28:20.108 [2024-11-19 10:55:27.233112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.108 [2024-11-19 10:55:27.233145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.108 qpair failed and we were unable to recover it. 00:28:20.108 [2024-11-19 10:55:27.233270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.108 [2024-11-19 10:55:27.233301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.108 qpair failed and we were unable to recover it. 
00:28:20.108 [2024-11-19 10:55:27.233510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.108 [2024-11-19 10:55:27.233542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.108 qpair failed and we were unable to recover it. 00:28:20.108 [2024-11-19 10:55:27.233803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.108 [2024-11-19 10:55:27.233834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.108 qpair failed and we were unable to recover it. 00:28:20.108 [2024-11-19 10:55:27.234021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.108 [2024-11-19 10:55:27.234053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.108 qpair failed and we were unable to recover it. 00:28:20.108 [2024-11-19 10:55:27.234317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.108 [2024-11-19 10:55:27.234348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.108 qpair failed and we were unable to recover it. 00:28:20.108 [2024-11-19 10:55:27.234564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.108 [2024-11-19 10:55:27.234597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.108 qpair failed and we were unable to recover it. 00:28:20.108 [2024-11-19 10:55:27.234798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.108 [2024-11-19 10:55:27.234831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.108 qpair failed and we were unable to recover it. 00:28:20.108 [2024-11-19 10:55:27.234944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.108 [2024-11-19 10:55:27.234988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.108 qpair failed and we were unable to recover it. 00:28:20.108 [2024-11-19 10:55:27.235159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.108 [2024-11-19 10:55:27.235190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.108 qpair failed and we were unable to recover it. 00:28:20.108 [2024-11-19 10:55:27.235393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.108 [2024-11-19 10:55:27.235424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.108 qpair failed and we were unable to recover it. 00:28:20.108 [2024-11-19 10:55:27.235600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.108 [2024-11-19 10:55:27.235632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.108 qpair failed and we were unable to recover it. 
00:28:20.108 [2024-11-19 10:55:27.235863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.108 [2024-11-19 10:55:27.235894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.108 qpair failed and we were unable to recover it. 00:28:20.109 [2024-11-19 10:55:27.236139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.109 [2024-11-19 10:55:27.236172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.109 qpair failed and we were unable to recover it. 00:28:20.109 [2024-11-19 10:55:27.236274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.109 [2024-11-19 10:55:27.236305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.109 qpair failed and we were unable to recover it. 00:28:20.109 [2024-11-19 10:55:27.236567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.109 [2024-11-19 10:55:27.236599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.109 qpair failed and we were unable to recover it. 00:28:20.109 [2024-11-19 10:55:27.236735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.109 [2024-11-19 10:55:27.236766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.109 qpair failed and we were unable to recover it. 00:28:20.109 [2024-11-19 10:55:27.236962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.109 [2024-11-19 10:55:27.236995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.109 qpair failed and we were unable to recover it. 00:28:20.109 [2024-11-19 10:55:27.237178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.109 [2024-11-19 10:55:27.237208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.109 qpair failed and we were unable to recover it. 00:28:20.109 [2024-11-19 10:55:27.237376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.109 [2024-11-19 10:55:27.237408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.109 qpair failed and we were unable to recover it. 00:28:20.109 [2024-11-19 10:55:27.237672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.109 [2024-11-19 10:55:27.237705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.109 qpair failed and we were unable to recover it. 00:28:20.109 [2024-11-19 10:55:27.237876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.109 [2024-11-19 10:55:27.237908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.109 qpair failed and we were unable to recover it. 
00:28:20.109 [2024-11-19 10:55:27.238165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.109 [2024-11-19 10:55:27.238198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.109 qpair failed and we were unable to recover it. 00:28:20.109 [2024-11-19 10:55:27.238455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.109 [2024-11-19 10:55:27.238486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.109 qpair failed and we were unable to recover it. 00:28:20.109 [2024-11-19 10:55:27.238676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.109 [2024-11-19 10:55:27.238708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.109 qpair failed and we were unable to recover it. 00:28:20.109 [2024-11-19 10:55:27.238945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.109 [2024-11-19 10:55:27.238989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.109 qpair failed and we were unable to recover it. 00:28:20.109 [2024-11-19 10:55:27.239196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.109 [2024-11-19 10:55:27.239227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.109 qpair failed and we were unable to recover it. 00:28:20.109 [2024-11-19 10:55:27.239434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.109 [2024-11-19 10:55:27.239465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.109 qpair failed and we were unable to recover it. 00:28:20.109 [2024-11-19 10:55:27.239655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.109 [2024-11-19 10:55:27.239688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.109 qpair failed and we were unable to recover it. 00:28:20.109 [2024-11-19 10:55:27.239927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.109 [2024-11-19 10:55:27.239966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.109 qpair failed and we were unable to recover it. 00:28:20.109 [2024-11-19 10:55:27.240232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.109 [2024-11-19 10:55:27.240264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.109 qpair failed and we were unable to recover it. 00:28:20.109 [2024-11-19 10:55:27.240463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.109 [2024-11-19 10:55:27.240493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.109 qpair failed and we were unable to recover it. 
00:28:20.109 [2024-11-19 10:55:27.240732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.109 [2024-11-19 10:55:27.240764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.109 qpair failed and we were unable to recover it. 00:28:20.109 [2024-11-19 10:55:27.240958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.109 [2024-11-19 10:55:27.240998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.109 qpair failed and we were unable to recover it. 00:28:20.109 [2024-11-19 10:55:27.241175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.109 [2024-11-19 10:55:27.241206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.109 qpair failed and we were unable to recover it. 00:28:20.109 [2024-11-19 10:55:27.241320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.109 [2024-11-19 10:55:27.241351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.109 qpair failed and we were unable to recover it. 00:28:20.109 [2024-11-19 10:55:27.241556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.109 [2024-11-19 10:55:27.241588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.109 qpair failed and we were unable to recover it. 00:28:20.109 [2024-11-19 10:55:27.241717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.109 [2024-11-19 10:55:27.241748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.109 qpair failed and we were unable to recover it. 00:28:20.109 [2024-11-19 10:55:27.241935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.109 [2024-11-19 10:55:27.241997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.109 qpair failed and we were unable to recover it. 00:28:20.109 [2024-11-19 10:55:27.242189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.109 [2024-11-19 10:55:27.242222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.109 qpair failed and we were unable to recover it. 00:28:20.109 [2024-11-19 10:55:27.242482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.109 [2024-11-19 10:55:27.242513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.109 qpair failed and we were unable to recover it. 00:28:20.109 [2024-11-19 10:55:27.242749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.109 [2024-11-19 10:55:27.242780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.109 qpair failed and we were unable to recover it. 
00:28:20.109 [2024-11-19 10:55:27.242971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.109 [2024-11-19 10:55:27.243005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.109 qpair failed and we were unable to recover it. 00:28:20.109 [2024-11-19 10:55:27.243279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.109 [2024-11-19 10:55:27.243310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.109 qpair failed and we were unable to recover it. 00:28:20.109 [2024-11-19 10:55:27.243444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.109 [2024-11-19 10:55:27.243482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.109 qpair failed and we were unable to recover it. 00:28:20.109 [2024-11-19 10:55:27.243601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.109 [2024-11-19 10:55:27.243632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.109 qpair failed and we were unable to recover it. 00:28:20.109 [2024-11-19 10:55:27.243814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.109 [2024-11-19 10:55:27.243845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.109 qpair failed and we were unable to recover it. 00:28:20.109 [2024-11-19 10:55:27.244091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.109 [2024-11-19 10:55:27.244125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.109 qpair failed and we were unable to recover it. 00:28:20.109 [2024-11-19 10:55:27.244228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.109 [2024-11-19 10:55:27.244261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.109 qpair failed and we were unable to recover it. 00:28:20.109 [2024-11-19 10:55:27.244428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.109 [2024-11-19 10:55:27.244460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.109 qpair failed and we were unable to recover it. 00:28:20.109 [2024-11-19 10:55:27.244644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.110 [2024-11-19 10:55:27.244675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.110 qpair failed and we were unable to recover it. 00:28:20.110 [2024-11-19 10:55:27.244935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.110 [2024-11-19 10:55:27.244988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.110 qpair failed and we were unable to recover it. 
00:28:20.110 [2024-11-19 10:55:27.245104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.110 [2024-11-19 10:55:27.245135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.110 qpair failed and we were unable to recover it. 00:28:20.110 [2024-11-19 10:55:27.245270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.110 [2024-11-19 10:55:27.245301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.110 qpair failed and we were unable to recover it. 00:28:20.110 [2024-11-19 10:55:27.245564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.110 [2024-11-19 10:55:27.245594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.110 qpair failed and we were unable to recover it. 00:28:20.110 [2024-11-19 10:55:27.245785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.110 [2024-11-19 10:55:27.245815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.110 qpair failed and we were unable to recover it. 00:28:20.110 [2024-11-19 10:55:27.246006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.110 [2024-11-19 10:55:27.246039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.110 qpair failed and we were unable to recover it. 00:28:20.110 [2024-11-19 10:55:27.246326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.110 [2024-11-19 10:55:27.246358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.110 qpair failed and we were unable to recover it. 00:28:20.110 [2024-11-19 10:55:27.246596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.110 [2024-11-19 10:55:27.246627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.110 qpair failed and we were unable to recover it. 00:28:20.110 [2024-11-19 10:55:27.246801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.110 [2024-11-19 10:55:27.246832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.110 qpair failed and we were unable to recover it. 00:28:20.110 [2024-11-19 10:55:27.246969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.110 [2024-11-19 10:55:27.247008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.110 qpair failed and we were unable to recover it. 00:28:20.110 [2024-11-19 10:55:27.247273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.110 [2024-11-19 10:55:27.247304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.110 qpair failed and we were unable to recover it. 
00:28:20.110 [2024-11-19 10:55:27.247430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.110 [2024-11-19 10:55:27.247464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.110 qpair failed and we were unable to recover it. 00:28:20.110 [2024-11-19 10:55:27.247581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.110 [2024-11-19 10:55:27.247613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.110 qpair failed and we were unable to recover it. 00:28:20.110 [2024-11-19 10:55:27.247729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.110 [2024-11-19 10:55:27.247760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.110 qpair failed and we were unable to recover it. 00:28:20.110 [2024-11-19 10:55:27.247961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.110 [2024-11-19 10:55:27.247994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.110 qpair failed and we were unable to recover it. 00:28:20.110 [2024-11-19 10:55:27.248199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.110 [2024-11-19 10:55:27.248231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.110 qpair failed and we were unable to recover it. 00:28:20.110 [2024-11-19 10:55:27.248335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.110 [2024-11-19 10:55:27.248367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.110 qpair failed and we were unable to recover it. 00:28:20.110 [2024-11-19 10:55:27.248490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.110 [2024-11-19 10:55:27.248521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.110 qpair failed and we were unable to recover it. 00:28:20.110 [2024-11-19 10:55:27.248651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.110 [2024-11-19 10:55:27.248682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.110 qpair failed and we were unable to recover it. 00:28:20.110 [2024-11-19 10:55:27.248917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.110 [2024-11-19 10:55:27.248959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.110 qpair failed and we were unable to recover it. 00:28:20.110 [2024-11-19 10:55:27.249082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.110 [2024-11-19 10:55:27.249114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.110 qpair failed and we were unable to recover it. 
00:28:20.110 [2024-11-19 10:55:27.249295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.110 [2024-11-19 10:55:27.249326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.110 qpair failed and we were unable to recover it. 00:28:20.110 [2024-11-19 10:55:27.249461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.110 [2024-11-19 10:55:27.249493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.110 qpair failed and we were unable to recover it. 00:28:20.110 [2024-11-19 10:55:27.249626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.110 [2024-11-19 10:55:27.249657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.110 qpair failed and we were unable to recover it. 00:28:20.110 [2024-11-19 10:55:27.249838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.110 [2024-11-19 10:55:27.249869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.110 qpair failed and we were unable to recover it. 00:28:20.110 [2024-11-19 10:55:27.250048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.110 [2024-11-19 10:55:27.250081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.110 qpair failed and we were unable to recover it. 00:28:20.110 [2024-11-19 10:55:27.250251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.110 [2024-11-19 10:55:27.250283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.110 qpair failed and we were unable to recover it. 00:28:20.110 [2024-11-19 10:55:27.250539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.110 [2024-11-19 10:55:27.250570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.110 qpair failed and we were unable to recover it. 00:28:20.110 [2024-11-19 10:55:27.250782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.110 [2024-11-19 10:55:27.250814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.110 qpair failed and we were unable to recover it. 00:28:20.110 [2024-11-19 10:55:27.251005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.110 [2024-11-19 10:55:27.251037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.110 qpair failed and we were unable to recover it. 00:28:20.110 [2024-11-19 10:55:27.251163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.110 [2024-11-19 10:55:27.251195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.110 qpair failed and we were unable to recover it. 
00:28:20.110 [2024-11-19 10:55:27.251373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.110 [2024-11-19 10:55:27.251405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.110 qpair failed and we were unable to recover it. 00:28:20.110 [2024-11-19 10:55:27.251613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.110 [2024-11-19 10:55:27.251644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.110 qpair failed and we were unable to recover it. 00:28:20.110 [2024-11-19 10:55:27.251826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.110 [2024-11-19 10:55:27.251857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.110 qpair failed and we were unable to recover it. 00:28:20.110 [2024-11-19 10:55:27.252144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.110 [2024-11-19 10:55:27.252178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.110 qpair failed and we were unable to recover it. 00:28:20.110 [2024-11-19 10:55:27.252369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.110 [2024-11-19 10:55:27.252401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.110 qpair failed and we were unable to recover it. 00:28:20.110 [2024-11-19 10:55:27.252582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.110 [2024-11-19 10:55:27.252615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.110 qpair failed and we were unable to recover it. 00:28:20.110 [2024-11-19 10:55:27.252848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.111 [2024-11-19 10:55:27.252879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.111 qpair failed and we were unable to recover it. 00:28:20.111 [2024-11-19 10:55:27.253070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.111 [2024-11-19 10:55:27.253104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.111 qpair failed and we were unable to recover it. 00:28:20.111 [2024-11-19 10:55:27.253309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.111 [2024-11-19 10:55:27.253340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.111 qpair failed and we were unable to recover it. 00:28:20.111 [2024-11-19 10:55:27.253523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.111 [2024-11-19 10:55:27.253555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.111 qpair failed and we were unable to recover it. 
00:28:20.111 [2024-11-19 10:55:27.253823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.111 [2024-11-19 10:55:27.253854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.111 qpair failed and we were unable to recover it. 00:28:20.111 [2024-11-19 10:55:27.254027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.111 [2024-11-19 10:55:27.254060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.111 qpair failed and we were unable to recover it. 00:28:20.111 [2024-11-19 10:55:27.254253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.111 [2024-11-19 10:55:27.254285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.111 qpair failed and we were unable to recover it. 00:28:20.111 [2024-11-19 10:55:27.254489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.111 [2024-11-19 10:55:27.254521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.111 qpair failed and we were unable to recover it. 00:28:20.111 [2024-11-19 10:55:27.254639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.111 [2024-11-19 10:55:27.254670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.111 qpair failed and we were unable to recover it. 00:28:20.111 [2024-11-19 10:55:27.254857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.111 [2024-11-19 10:55:27.254889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.111 qpair failed and we were unable to recover it. 00:28:20.111 [2024-11-19 10:55:27.255135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.111 [2024-11-19 10:55:27.255168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.111 qpair failed and we were unable to recover it. 00:28:20.111 [2024-11-19 10:55:27.255350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.111 [2024-11-19 10:55:27.255381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.111 qpair failed and we were unable to recover it. 00:28:20.111 [2024-11-19 10:55:27.255568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.111 [2024-11-19 10:55:27.255605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.111 qpair failed and we were unable to recover it. 00:28:20.111 [2024-11-19 10:55:27.255778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.111 [2024-11-19 10:55:27.255809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.111 qpair failed and we were unable to recover it. 
00:28:20.111 [2024-11-19 10:55:27.256075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.111 [2024-11-19 10:55:27.256108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.111 qpair failed and we were unable to recover it. 00:28:20.111 [2024-11-19 10:55:27.256293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.111 [2024-11-19 10:55:27.256325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.111 qpair failed and we were unable to recover it. 00:28:20.111 [2024-11-19 10:55:27.256583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.111 [2024-11-19 10:55:27.256614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.111 qpair failed and we were unable to recover it. 00:28:20.111 [2024-11-19 10:55:27.256809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.111 [2024-11-19 10:55:27.256841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.111 qpair failed and we were unable to recover it. 00:28:20.111 [2024-11-19 10:55:27.257118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.111 [2024-11-19 10:55:27.257151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.111 qpair failed and we were unable to recover it. 00:28:20.111 [2024-11-19 10:55:27.257351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.111 [2024-11-19 10:55:27.257382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.111 qpair failed and we were unable to recover it. 00:28:20.111 [2024-11-19 10:55:27.257575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.111 [2024-11-19 10:55:27.257606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.111 qpair failed and we were unable to recover it. 00:28:20.111 [2024-11-19 10:55:27.257842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.111 [2024-11-19 10:55:27.257874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.111 qpair failed and we were unable to recover it. 00:28:20.111 [2024-11-19 10:55:27.258012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.111 [2024-11-19 10:55:27.258045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.111 qpair failed and we were unable to recover it. 00:28:20.111 [2024-11-19 10:55:27.258249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.111 [2024-11-19 10:55:27.258281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.111 qpair failed and we were unable to recover it. 
00:28:20.111 [2024-11-19 10:55:27.258565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.111 [2024-11-19 10:55:27.258597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.111 qpair failed and we were unable to recover it. 00:28:20.111 [2024-11-19 10:55:27.258768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.111 [2024-11-19 10:55:27.258801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.111 qpair failed and we were unable to recover it. 00:28:20.111 [2024-11-19 10:55:27.259045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.111 [2024-11-19 10:55:27.259079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.111 qpair failed and we were unable to recover it. 00:28:20.111 [2024-11-19 10:55:27.259265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.111 [2024-11-19 10:55:27.259297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.111 qpair failed and we were unable to recover it. 00:28:20.111 [2024-11-19 10:55:27.259486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.111 [2024-11-19 10:55:27.259518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.111 qpair failed and we were unable to recover it. 00:28:20.111 [2024-11-19 10:55:27.259813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.111 [2024-11-19 10:55:27.259845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.111 qpair failed and we were unable to recover it. 00:28:20.111 [2024-11-19 10:55:27.260032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.111 [2024-11-19 10:55:27.260064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.111 qpair failed and we were unable to recover it. 00:28:20.111 [2024-11-19 10:55:27.260326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.111 [2024-11-19 10:55:27.260359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.111 qpair failed and we were unable to recover it. 00:28:20.111 [2024-11-19 10:55:27.260543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.111 [2024-11-19 10:55:27.260573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.111 qpair failed and we were unable to recover it. 00:28:20.111 [2024-11-19 10:55:27.260756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.112 [2024-11-19 10:55:27.260788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.112 qpair failed and we were unable to recover it. 
00:28:20.112 [2024-11-19 10:55:27.260906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.112 [2024-11-19 10:55:27.260937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.112 qpair failed and we were unable to recover it. 00:28:20.112 [2024-11-19 10:55:27.261144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.112 [2024-11-19 10:55:27.261176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.112 qpair failed and we were unable to recover it. 00:28:20.112 [2024-11-19 10:55:27.261349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.112 [2024-11-19 10:55:27.261380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.112 qpair failed and we were unable to recover it. 00:28:20.112 [2024-11-19 10:55:27.261515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.112 [2024-11-19 10:55:27.261547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.112 qpair failed and we were unable to recover it. 00:28:20.112 [2024-11-19 10:55:27.261718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.112 [2024-11-19 10:55:27.261750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.112 qpair failed and we were unable to recover it. 00:28:20.112 [2024-11-19 10:55:27.261889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.112 [2024-11-19 10:55:27.261921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.112 qpair failed and we were unable to recover it. 00:28:20.112 [2024-11-19 10:55:27.262047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.112 [2024-11-19 10:55:27.262079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.112 qpair failed and we were unable to recover it. 00:28:20.112 [2024-11-19 10:55:27.262291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.112 [2024-11-19 10:55:27.262323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.112 qpair failed and we were unable to recover it. 00:28:20.112 [2024-11-19 10:55:27.262507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.112 [2024-11-19 10:55:27.262539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.112 qpair failed and we were unable to recover it. 00:28:20.112 [2024-11-19 10:55:27.262658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.112 [2024-11-19 10:55:27.262689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.112 qpair failed and we were unable to recover it. 
00:28:20.112 [2024-11-19 10:55:27.262865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.112 [2024-11-19 10:55:27.262897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.112 qpair failed and we were unable to recover it. 00:28:20.112 [2024-11-19 10:55:27.263147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.112 [2024-11-19 10:55:27.263180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.112 qpair failed and we were unable to recover it. 00:28:20.112 [2024-11-19 10:55:27.263286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.112 [2024-11-19 10:55:27.263317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.112 qpair failed and we were unable to recover it. 00:28:20.112 [2024-11-19 10:55:27.263556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.112 [2024-11-19 10:55:27.263588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.112 qpair failed and we were unable to recover it. 00:28:20.112 [2024-11-19 10:55:27.263826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.112 [2024-11-19 10:55:27.263857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.112 qpair failed and we were unable to recover it. 00:28:20.112 [2024-11-19 10:55:27.264057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.112 [2024-11-19 10:55:27.264090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.112 qpair failed and we were unable to recover it. 00:28:20.112 [2024-11-19 10:55:27.264263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.112 [2024-11-19 10:55:27.264295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.112 qpair failed and we were unable to recover it. 00:28:20.112 [2024-11-19 10:55:27.264422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.112 [2024-11-19 10:55:27.264454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.112 qpair failed and we were unable to recover it. 00:28:20.112 [2024-11-19 10:55:27.264715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.112 [2024-11-19 10:55:27.264751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.112 qpair failed and we were unable to recover it. 00:28:20.112 [2024-11-19 10:55:27.264935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.112 [2024-11-19 10:55:27.264976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.112 qpair failed and we were unable to recover it. 
00:28:20.112 [2024-11-19 10:55:27.265150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.112 [2024-11-19 10:55:27.265182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.112 qpair failed and we were unable to recover it. 00:28:20.112 [2024-11-19 10:55:27.265289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.112 [2024-11-19 10:55:27.265321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.112 qpair failed and we were unable to recover it. 00:28:20.112 [2024-11-19 10:55:27.265566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.112 [2024-11-19 10:55:27.265598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.112 qpair failed and we were unable to recover it. 00:28:20.112 [2024-11-19 10:55:27.265858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.112 [2024-11-19 10:55:27.265890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.112 qpair failed and we were unable to recover it. 00:28:20.112 [2024-11-19 10:55:27.266039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.112 [2024-11-19 10:55:27.266071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.112 qpair failed and we were unable to recover it. 00:28:20.112 [2024-11-19 10:55:27.266276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.112 [2024-11-19 10:55:27.266308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.112 qpair failed and we were unable to recover it. 00:28:20.112 [2024-11-19 10:55:27.266570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.112 [2024-11-19 10:55:27.266601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.112 qpair failed and we were unable to recover it. 00:28:20.112 [2024-11-19 10:55:27.266726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.112 [2024-11-19 10:55:27.266757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.112 qpair failed and we were unable to recover it. 00:28:20.112 [2024-11-19 10:55:27.266880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.112 [2024-11-19 10:55:27.266911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.112 qpair failed and we were unable to recover it. 00:28:20.112 [2024-11-19 10:55:27.267056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.112 [2024-11-19 10:55:27.267089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.112 qpair failed and we were unable to recover it. 
00:28:20.112 [2024-11-19 10:55:27.267209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.112 [2024-11-19 10:55:27.267240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.112 qpair failed and we were unable to recover it. 00:28:20.112 [2024-11-19 10:55:27.267355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.112 [2024-11-19 10:55:27.267386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.112 qpair failed and we were unable to recover it. 00:28:20.112 [2024-11-19 10:55:27.267584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.112 [2024-11-19 10:55:27.267616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.112 qpair failed and we were unable to recover it. 00:28:20.112 [2024-11-19 10:55:27.267797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.112 [2024-11-19 10:55:27.267828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.112 qpair failed and we were unable to recover it. 00:28:20.112 [2024-11-19 10:55:27.268093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.112 [2024-11-19 10:55:27.268126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.112 qpair failed and we were unable to recover it. 00:28:20.112 [2024-11-19 10:55:27.268323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.112 [2024-11-19 10:55:27.268355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.112 qpair failed and we were unable to recover it. 00:28:20.112 [2024-11-19 10:55:27.268596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.112 [2024-11-19 10:55:27.268628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.112 qpair failed and we were unable to recover it. 00:28:20.112 [2024-11-19 10:55:27.268764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.113 [2024-11-19 10:55:27.268795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.113 qpair failed and we were unable to recover it. 00:28:20.113 [2024-11-19 10:55:27.268933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.113 [2024-11-19 10:55:27.268973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.113 qpair failed and we were unable to recover it. 00:28:20.113 [2024-11-19 10:55:27.269090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.113 [2024-11-19 10:55:27.269121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.113 qpair failed and we were unable to recover it. 
00:28:20.113 [2024-11-19 10:55:27.269363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.113 [2024-11-19 10:55:27.269394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.113 qpair failed and we were unable to recover it. 00:28:20.113 [2024-11-19 10:55:27.269578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.113 [2024-11-19 10:55:27.269609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.113 qpair failed and we were unable to recover it. 00:28:20.113 [2024-11-19 10:55:27.269788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.113 [2024-11-19 10:55:27.269819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.113 qpair failed and we were unable to recover it. 00:28:20.113 [2024-11-19 10:55:27.269998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.113 [2024-11-19 10:55:27.270030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.113 qpair failed and we were unable to recover it. 00:28:20.113 [2024-11-19 10:55:27.270219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.113 [2024-11-19 10:55:27.270251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.113 qpair failed and we were unable to recover it. 00:28:20.113 [2024-11-19 10:55:27.270387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.113 [2024-11-19 10:55:27.270418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.113 qpair failed and we were unable to recover it. 00:28:20.113 [2024-11-19 10:55:27.270597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.113 [2024-11-19 10:55:27.270629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.113 qpair failed and we were unable to recover it. 00:28:20.113 [2024-11-19 10:55:27.270809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.113 [2024-11-19 10:55:27.270842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.113 qpair failed and we were unable to recover it. 00:28:20.113 [2024-11-19 10:55:27.271012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.113 [2024-11-19 10:55:27.271045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.113 qpair failed and we were unable to recover it. 00:28:20.113 [2024-11-19 10:55:27.271282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.113 [2024-11-19 10:55:27.271313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.113 qpair failed and we were unable to recover it. 
00:28:20.113 [2024-11-19 10:55:27.271499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.113 [2024-11-19 10:55:27.271531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.113 qpair failed and we were unable to recover it. 00:28:20.113 [2024-11-19 10:55:27.271713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.113 [2024-11-19 10:55:27.271744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.113 qpair failed and we were unable to recover it. 00:28:20.113 [2024-11-19 10:55:27.271982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.113 [2024-11-19 10:55:27.272016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.113 qpair failed and we were unable to recover it. 00:28:20.113 [2024-11-19 10:55:27.272217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.113 [2024-11-19 10:55:27.272248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.113 qpair failed and we were unable to recover it. 00:28:20.113 [2024-11-19 10:55:27.272369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.113 [2024-11-19 10:55:27.272400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.113 qpair failed and we were unable to recover it. 00:28:20.113 [2024-11-19 10:55:27.272573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.113 [2024-11-19 10:55:27.272605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.113 qpair failed and we were unable to recover it. 00:28:20.113 [2024-11-19 10:55:27.272806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.113 [2024-11-19 10:55:27.272837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.113 qpair failed and we were unable to recover it. 00:28:20.113 [2024-11-19 10:55:27.273044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.113 [2024-11-19 10:55:27.273077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.113 qpair failed and we were unable to recover it. 00:28:20.113 [2024-11-19 10:55:27.273244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.113 [2024-11-19 10:55:27.273281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.113 qpair failed and we were unable to recover it. 00:28:20.113 [2024-11-19 10:55:27.273425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.113 [2024-11-19 10:55:27.273456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.113 qpair failed and we were unable to recover it. 
00:28:20.113 [2024-11-19 10:55:27.273583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.113 [2024-11-19 10:55:27.273616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.113 qpair failed and we were unable to recover it. 00:28:20.113 [2024-11-19 10:55:27.273879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.113 [2024-11-19 10:55:27.273910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.113 qpair failed and we were unable to recover it. 00:28:20.113 [2024-11-19 10:55:27.274027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.113 [2024-11-19 10:55:27.274060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.113 qpair failed and we were unable to recover it. 00:28:20.113 [2024-11-19 10:55:27.274299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.113 [2024-11-19 10:55:27.274330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.113 qpair failed and we were unable to recover it. 00:28:20.113 [2024-11-19 10:55:27.274569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.113 [2024-11-19 10:55:27.274600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.113 qpair failed and we were unable to recover it. 00:28:20.113 [2024-11-19 10:55:27.274809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.113 [2024-11-19 10:55:27.274839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.113 qpair failed and we were unable to recover it. 00:28:20.113 [2024-11-19 10:55:27.274946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.113 [2024-11-19 10:55:27.274987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.113 qpair failed and we were unable to recover it. 00:28:20.113 [2024-11-19 10:55:27.275175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.113 [2024-11-19 10:55:27.275206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.113 qpair failed and we were unable to recover it. 00:28:20.113 [2024-11-19 10:55:27.275326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.113 [2024-11-19 10:55:27.275358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.113 qpair failed and we were unable to recover it. 00:28:20.113 [2024-11-19 10:55:27.275478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.113 [2024-11-19 10:55:27.275509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.113 qpair failed and we were unable to recover it. 
00:28:20.113 [2024-11-19 10:55:27.275694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.113 [2024-11-19 10:55:27.275726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.113 qpair failed and we were unable to recover it. 00:28:20.113 [2024-11-19 10:55:27.275895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.113 [2024-11-19 10:55:27.275926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.113 qpair failed and we were unable to recover it. 00:28:20.113 [2024-11-19 10:55:27.276127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.113 [2024-11-19 10:55:27.276161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.113 qpair failed and we were unable to recover it. 00:28:20.113 [2024-11-19 10:55:27.276425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.113 [2024-11-19 10:55:27.276456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.113 qpair failed and we were unable to recover it. 00:28:20.113 [2024-11-19 10:55:27.276575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.114 [2024-11-19 10:55:27.276607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.114 qpair failed and we were unable to recover it. 00:28:20.114 [2024-11-19 10:55:27.276867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.114 [2024-11-19 10:55:27.276899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.114 qpair failed and we were unable to recover it. 00:28:20.114 [2024-11-19 10:55:27.277088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.114 [2024-11-19 10:55:27.277119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.114 qpair failed and we were unable to recover it. 00:28:20.114 [2024-11-19 10:55:27.277315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.114 [2024-11-19 10:55:27.277347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.114 qpair failed and we were unable to recover it. 00:28:20.114 [2024-11-19 10:55:27.277470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.114 [2024-11-19 10:55:27.277501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.114 qpair failed and we were unable to recover it. 00:28:20.114 [2024-11-19 10:55:27.277624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.114 [2024-11-19 10:55:27.277656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.114 qpair failed and we were unable to recover it. 
00:28:20.114 [2024-11-19 10:55:27.277791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.114 [2024-11-19 10:55:27.277822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.114 qpair failed and we were unable to recover it. 00:28:20.114 [2024-11-19 10:55:27.278002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.114 [2024-11-19 10:55:27.278035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.114 qpair failed and we were unable to recover it. 00:28:20.114 [2024-11-19 10:55:27.278171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.114 [2024-11-19 10:55:27.278204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.114 qpair failed and we were unable to recover it. 00:28:20.114 [2024-11-19 10:55:27.278389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.114 [2024-11-19 10:55:27.278421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.114 qpair failed and we were unable to recover it. 00:28:20.114 [2024-11-19 10:55:27.278611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.114 [2024-11-19 10:55:27.278642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.114 qpair failed and we were unable to recover it. 00:28:20.114 [2024-11-19 10:55:27.278759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.114 [2024-11-19 10:55:27.278791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.114 qpair failed and we were unable to recover it. 00:28:20.114 [2024-11-19 10:55:27.278966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.114 [2024-11-19 10:55:27.278999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.114 qpair failed and we were unable to recover it. 00:28:20.114 [2024-11-19 10:55:27.279271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.114 [2024-11-19 10:55:27.279303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.114 qpair failed and we were unable to recover it. 00:28:20.114 [2024-11-19 10:55:27.279561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.114 [2024-11-19 10:55:27.279592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.114 qpair failed and we were unable to recover it. 00:28:20.114 [2024-11-19 10:55:27.279732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.114 [2024-11-19 10:55:27.279764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.114 qpair failed and we were unable to recover it. 
00:28:20.114 [2024-11-19 10:55:27.280001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.114 [2024-11-19 10:55:27.280034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.114 qpair failed and we were unable to recover it. 00:28:20.114 [2024-11-19 10:55:27.280272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.114 [2024-11-19 10:55:27.280303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.114 qpair failed and we were unable to recover it. 00:28:20.114 [2024-11-19 10:55:27.280504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.114 [2024-11-19 10:55:27.280535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.114 qpair failed and we were unable to recover it. 00:28:20.114 [2024-11-19 10:55:27.280666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.114 [2024-11-19 10:55:27.280699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.114 qpair failed and we were unable to recover it. 00:28:20.114 [2024-11-19 10:55:27.280936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.114 [2024-11-19 10:55:27.280977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.114 qpair failed and we were unable to recover it. 00:28:20.114 [2024-11-19 10:55:27.281157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.114 [2024-11-19 10:55:27.281189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.114 qpair failed and we were unable to recover it. 00:28:20.114 [2024-11-19 10:55:27.281362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.114 [2024-11-19 10:55:27.281392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.114 qpair failed and we were unable to recover it. 00:28:20.114 [2024-11-19 10:55:27.281574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.114 [2024-11-19 10:55:27.281605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.114 qpair failed and we were unable to recover it. 00:28:20.114 [2024-11-19 10:55:27.281866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.114 [2024-11-19 10:55:27.281904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.114 qpair failed and we were unable to recover it. 00:28:20.114 [2024-11-19 10:55:27.282154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.114 [2024-11-19 10:55:27.282189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.114 qpair failed and we were unable to recover it. 
00:28:20.114 [2024-11-19 10:55:27.282311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.114 [2024-11-19 10:55:27.282342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.114 qpair failed and we were unable to recover it. 00:28:20.114 [2024-11-19 10:55:27.282523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.114 [2024-11-19 10:55:27.282555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.114 qpair failed and we were unable to recover it. 00:28:20.114 [2024-11-19 10:55:27.282742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.114 [2024-11-19 10:55:27.282774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.114 qpair failed and we were unable to recover it. 00:28:20.114 [2024-11-19 10:55:27.283037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.114 [2024-11-19 10:55:27.283070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.114 qpair failed and we were unable to recover it. 00:28:20.114 [2024-11-19 10:55:27.283189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.114 [2024-11-19 10:55:27.283221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.114 qpair failed and we were unable to recover it. 00:28:20.114 [2024-11-19 10:55:27.283342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.114 [2024-11-19 10:55:27.283373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.114 qpair failed and we were unable to recover it. 00:28:20.114 [2024-11-19 10:55:27.283625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.114 [2024-11-19 10:55:27.283656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.114 qpair failed and we were unable to recover it. 00:28:20.114 [2024-11-19 10:55:27.283840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.114 [2024-11-19 10:55:27.283871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.114 qpair failed and we were unable to recover it. 00:28:20.114 [2024-11-19 10:55:27.284008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.114 [2024-11-19 10:55:27.284041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.114 qpair failed and we were unable to recover it. 00:28:20.114 [2024-11-19 10:55:27.284230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.114 [2024-11-19 10:55:27.284262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.114 qpair failed and we were unable to recover it. 
00:28:20.114 [2024-11-19 10:55:27.284442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.114 [2024-11-19 10:55:27.284473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.114 qpair failed and we were unable to recover it. 00:28:20.114 [2024-11-19 10:55:27.284650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.114 [2024-11-19 10:55:27.284682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.114 qpair failed and we were unable to recover it. 00:28:20.114 [2024-11-19 10:55:27.284957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.115 [2024-11-19 10:55:27.284990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.115 qpair failed and we were unable to recover it. 00:28:20.115 [2024-11-19 10:55:27.285172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.115 [2024-11-19 10:55:27.285204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.115 qpair failed and we were unable to recover it. 00:28:20.115 [2024-11-19 10:55:27.285323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.115 [2024-11-19 10:55:27.285355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.115 qpair failed and we were unable to recover it. 00:28:20.115 [2024-11-19 10:55:27.285536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.115 [2024-11-19 10:55:27.285568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.115 qpair failed and we were unable to recover it. 00:28:20.115 [2024-11-19 10:55:27.285758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.115 [2024-11-19 10:55:27.285789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.115 qpair failed and we were unable to recover it. 00:28:20.115 [2024-11-19 10:55:27.286031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.115 [2024-11-19 10:55:27.286064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.115 qpair failed and we were unable to recover it. 00:28:20.115 [2024-11-19 10:55:27.286190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.115 [2024-11-19 10:55:27.286222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.115 qpair failed and we were unable to recover it. 00:28:20.115 [2024-11-19 10:55:27.286470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.115 [2024-11-19 10:55:27.286502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.115 qpair failed and we were unable to recover it. 
00:28:20.115 [2024-11-19 10:55:27.286745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.115 [2024-11-19 10:55:27.286776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.115 qpair failed and we were unable to recover it. 00:28:20.115 [2024-11-19 10:55:27.287055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.115 [2024-11-19 10:55:27.287088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.115 qpair failed and we were unable to recover it. 00:28:20.115 [2024-11-19 10:55:27.287269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.115 [2024-11-19 10:55:27.287300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.115 qpair failed and we were unable to recover it. 00:28:20.115 [2024-11-19 10:55:27.287486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.115 [2024-11-19 10:55:27.287518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.115 qpair failed and we were unable to recover it. 00:28:20.115 [2024-11-19 10:55:27.287753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.115 [2024-11-19 10:55:27.287784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.115 qpair failed and we were unable to recover it. 00:28:20.115 [2024-11-19 10:55:27.287986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.115 [2024-11-19 10:55:27.288020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.115 qpair failed and we were unable to recover it. 00:28:20.115 [2024-11-19 10:55:27.288146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.115 [2024-11-19 10:55:27.288177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.115 qpair failed and we were unable to recover it. 00:28:20.115 [2024-11-19 10:55:27.288367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.115 [2024-11-19 10:55:27.288398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.115 qpair failed and we were unable to recover it. 00:28:20.115 [2024-11-19 10:55:27.288512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.115 [2024-11-19 10:55:27.288544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.115 qpair failed and we were unable to recover it. 00:28:20.115 [2024-11-19 10:55:27.288721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.115 [2024-11-19 10:55:27.288754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.115 qpair failed and we were unable to recover it. 
00:28:20.115 [2024-11-19 10:55:27.288871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.115 [2024-11-19 10:55:27.288902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.115 qpair failed and we were unable to recover it. 00:28:20.115 [2024-11-19 10:55:27.289122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.115 [2024-11-19 10:55:27.289155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.115 qpair failed and we were unable to recover it. 00:28:20.115 [2024-11-19 10:55:27.289415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.115 [2024-11-19 10:55:27.289447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.115 qpair failed and we were unable to recover it. 00:28:20.115 [2024-11-19 10:55:27.289711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.115 [2024-11-19 10:55:27.289743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.115 qpair failed and we were unable to recover it. 00:28:20.115 [2024-11-19 10:55:27.289938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.115 [2024-11-19 10:55:27.289978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.115 qpair failed and we were unable to recover it. 00:28:20.115 [2024-11-19 10:55:27.290175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.115 [2024-11-19 10:55:27.290207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.115 qpair failed and we were unable to recover it. 00:28:20.115 [2024-11-19 10:55:27.290395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.115 [2024-11-19 10:55:27.290426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.115 qpair failed and we were unable to recover it. 00:28:20.115 [2024-11-19 10:55:27.290685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.115 [2024-11-19 10:55:27.290718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.115 qpair failed and we were unable to recover it. 00:28:20.115 [2024-11-19 10:55:27.290933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.115 [2024-11-19 10:55:27.290979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.115 qpair failed and we were unable to recover it. 00:28:20.115 [2024-11-19 10:55:27.291239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.115 [2024-11-19 10:55:27.291271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.115 qpair failed and we were unable to recover it. 
00:28:20.115 [2024-11-19 10:55:27.291457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.115 [2024-11-19 10:55:27.291488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.115 qpair failed and we were unable to recover it.
00:28:20.115 [2024-11-19 10:55:27.291700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.115 [2024-11-19 10:55:27.291732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.115 qpair failed and we were unable to recover it.
00:28:20.115 [2024-11-19 10:55:27.292000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.115 [2024-11-19 10:55:27.292032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.115 qpair failed and we were unable to recover it.
00:28:20.115 [2024-11-19 10:55:27.292162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.115 [2024-11-19 10:55:27.292194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.115 qpair failed and we were unable to recover it.
00:28:20.115 [2024-11-19 10:55:27.292376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.115 [2024-11-19 10:55:27.292407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.115 qpair failed and we were unable to recover it.
00:28:20.115 [2024-11-19 10:55:27.292596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.115 [2024-11-19 10:55:27.292628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.115 qpair failed and we were unable to recover it.
00:28:20.115 [2024-11-19 10:55:27.292865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.115 [2024-11-19 10:55:27.292897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.115 qpair failed and we were unable to recover it.
00:28:20.115 [2024-11-19 10:55:27.293127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.115 [2024-11-19 10:55:27.293159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.115 qpair failed and we were unable to recover it.
00:28:20.115 [2024-11-19 10:55:27.293347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.115 [2024-11-19 10:55:27.293378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.115 qpair failed and we were unable to recover it.
00:28:20.115 [2024-11-19 10:55:27.293501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.115 [2024-11-19 10:55:27.293533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.115 qpair failed and we were unable to recover it.
00:28:20.116 [2024-11-19 10:55:27.293713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.116 [2024-11-19 10:55:27.293744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.116 qpair failed and we were unable to recover it.
00:28:20.116 [2024-11-19 10:55:27.293871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.116 [2024-11-19 10:55:27.293903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.116 qpair failed and we were unable to recover it.
00:28:20.116 [2024-11-19 10:55:27.294089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.116 [2024-11-19 10:55:27.294122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.116 qpair failed and we were unable to recover it.
00:28:20.116 [2024-11-19 10:55:27.294235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.116 [2024-11-19 10:55:27.294268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.116 qpair failed and we were unable to recover it.
00:28:20.116 [2024-11-19 10:55:27.294446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.116 [2024-11-19 10:55:27.294477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.116 qpair failed and we were unable to recover it.
00:28:20.116 [2024-11-19 10:55:27.294648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.116 [2024-11-19 10:55:27.294679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.116 qpair failed and we were unable to recover it.
00:28:20.116 [2024-11-19 10:55:27.294875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.116 [2024-11-19 10:55:27.294908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.116 qpair failed and we were unable to recover it.
00:28:20.116 [2024-11-19 10:55:27.295102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.116 [2024-11-19 10:55:27.295135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.116 qpair failed and we were unable to recover it.
00:28:20.116 [2024-11-19 10:55:27.295311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.116 [2024-11-19 10:55:27.295342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.116 qpair failed and we were unable to recover it.
00:28:20.116 [2024-11-19 10:55:27.295525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.116 [2024-11-19 10:55:27.295557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.116 qpair failed and we were unable to recover it.
00:28:20.116 [2024-11-19 10:55:27.295665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.116 [2024-11-19 10:55:27.295696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.116 qpair failed and we were unable to recover it.
00:28:20.116 [2024-11-19 10:55:27.295831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.116 [2024-11-19 10:55:27.295863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.116 qpair failed and we were unable to recover it.
00:28:20.116 [2024-11-19 10:55:27.296111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.116 [2024-11-19 10:55:27.296144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.116 qpair failed and we were unable to recover it.
00:28:20.116 [2024-11-19 10:55:27.296415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.116 [2024-11-19 10:55:27.296447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.116 qpair failed and we were unable to recover it.
00:28:20.116 [2024-11-19 10:55:27.296635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.116 [2024-11-19 10:55:27.296665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.116 qpair failed and we were unable to recover it.
00:28:20.116 [2024-11-19 10:55:27.296900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.116 [2024-11-19 10:55:27.296933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.116 qpair failed and we were unable to recover it.
00:28:20.116 [2024-11-19 10:55:27.297122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.116 [2024-11-19 10:55:27.297153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.116 qpair failed and we were unable to recover it.
00:28:20.116 [2024-11-19 10:55:27.297366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.116 [2024-11-19 10:55:27.297398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.116 qpair failed and we were unable to recover it.
00:28:20.116 [2024-11-19 10:55:27.297652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.116 [2024-11-19 10:55:27.297683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.116 qpair failed and we were unable to recover it.
00:28:20.116 [2024-11-19 10:55:27.297860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.116 [2024-11-19 10:55:27.297892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.116 qpair failed and we were unable to recover it.
00:28:20.116 [2024-11-19 10:55:27.298087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.116 [2024-11-19 10:55:27.298120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.116 qpair failed and we were unable to recover it.
00:28:20.116 [2024-11-19 10:55:27.298304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.116 [2024-11-19 10:55:27.298335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.116 qpair failed and we were unable to recover it.
00:28:20.116 [2024-11-19 10:55:27.298571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.116 [2024-11-19 10:55:27.298602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.116 qpair failed and we were unable to recover it.
00:28:20.116 [2024-11-19 10:55:27.298719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.116 [2024-11-19 10:55:27.298751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.116 qpair failed and we were unable to recover it.
00:28:20.116 [2024-11-19 10:55:27.299011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.116 [2024-11-19 10:55:27.299043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.116 qpair failed and we were unable to recover it.
00:28:20.116 [2024-11-19 10:55:27.299298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.116 [2024-11-19 10:55:27.299330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.116 qpair failed and we were unable to recover it.
00:28:20.116 [2024-11-19 10:55:27.299510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.116 [2024-11-19 10:55:27.299541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.116 qpair failed and we were unable to recover it.
00:28:20.116 [2024-11-19 10:55:27.299721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.116 [2024-11-19 10:55:27.299753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.116 qpair failed and we were unable to recover it.
00:28:20.116 [2024-11-19 10:55:27.299996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.116 [2024-11-19 10:55:27.300036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.116 qpair failed and we were unable to recover it.
00:28:20.116 [2024-11-19 10:55:27.300158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.116 [2024-11-19 10:55:27.300190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.116 qpair failed and we were unable to recover it.
00:28:20.116 [2024-11-19 10:55:27.300372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.116 [2024-11-19 10:55:27.300403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.116 qpair failed and we were unable to recover it.
00:28:20.116 [2024-11-19 10:55:27.300590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.117 [2024-11-19 10:55:27.300622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.117 qpair failed and we were unable to recover it.
00:28:20.117 [2024-11-19 10:55:27.300804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.117 [2024-11-19 10:55:27.300835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.117 qpair failed and we were unable to recover it.
00:28:20.117 [2024-11-19 10:55:27.301017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.117 [2024-11-19 10:55:27.301051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.117 qpair failed and we were unable to recover it.
00:28:20.117 [2024-11-19 10:55:27.301249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.117 [2024-11-19 10:55:27.301280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.117 qpair failed and we were unable to recover it.
00:28:20.117 [2024-11-19 10:55:27.301534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.117 [2024-11-19 10:55:27.301566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.117 qpair failed and we were unable to recover it.
00:28:20.117 [2024-11-19 10:55:27.301769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.117 [2024-11-19 10:55:27.301801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.117 qpair failed and we were unable to recover it.
00:28:20.117 [2024-11-19 10:55:27.301978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.117 [2024-11-19 10:55:27.302009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.117 qpair failed and we were unable to recover it.
00:28:20.117 [2024-11-19 10:55:27.302256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.117 [2024-11-19 10:55:27.302288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.117 qpair failed and we were unable to recover it.
00:28:20.117 [2024-11-19 10:55:27.302412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.117 [2024-11-19 10:55:27.302444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.117 qpair failed and we were unable to recover it.
00:28:20.117 [2024-11-19 10:55:27.302679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.117 [2024-11-19 10:55:27.302711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.117 qpair failed and we were unable to recover it.
00:28:20.117 [2024-11-19 10:55:27.302891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.117 [2024-11-19 10:55:27.302921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.117 qpair failed and we were unable to recover it.
00:28:20.117 [2024-11-19 10:55:27.303178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.117 [2024-11-19 10:55:27.303211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.117 qpair failed and we were unable to recover it.
00:28:20.117 [2024-11-19 10:55:27.303473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.117 [2024-11-19 10:55:27.303504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.117 qpair failed and we were unable to recover it.
00:28:20.117 [2024-11-19 10:55:27.303702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.117 [2024-11-19 10:55:27.303734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.117 qpair failed and we were unable to recover it.
00:28:20.117 [2024-11-19 10:55:27.303855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.117 [2024-11-19 10:55:27.303886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.117 qpair failed and we were unable to recover it.
00:28:20.117 [2024-11-19 10:55:27.304156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.117 [2024-11-19 10:55:27.304189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.117 qpair failed and we were unable to recover it.
00:28:20.117 [2024-11-19 10:55:27.304372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.117 [2024-11-19 10:55:27.304404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.117 qpair failed and we were unable to recover it.
00:28:20.117 [2024-11-19 10:55:27.304652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.117 [2024-11-19 10:55:27.304684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.117 qpair failed and we were unable to recover it.
00:28:20.117 [2024-11-19 10:55:27.304958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.117 [2024-11-19 10:55:27.304992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.117 qpair failed and we were unable to recover it.
00:28:20.117 [2024-11-19 10:55:27.305254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.117 [2024-11-19 10:55:27.305285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.117 qpair failed and we were unable to recover it.
00:28:20.117 [2024-11-19 10:55:27.305456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.117 [2024-11-19 10:55:27.305487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.117 qpair failed and we were unable to recover it.
00:28:20.117 [2024-11-19 10:55:27.305733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.117 [2024-11-19 10:55:27.305764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.117 qpair failed and we were unable to recover it.
00:28:20.117 [2024-11-19 10:55:27.305935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.117 [2024-11-19 10:55:27.305977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.117 qpair failed and we were unable to recover it.
00:28:20.117 [2024-11-19 10:55:27.306162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.117 [2024-11-19 10:55:27.306193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.117 qpair failed and we were unable to recover it.
00:28:20.117 [2024-11-19 10:55:27.306311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.117 [2024-11-19 10:55:27.306343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.117 qpair failed and we were unable to recover it.
00:28:20.117 [2024-11-19 10:55:27.306533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.117 [2024-11-19 10:55:27.306564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.117 qpair failed and we were unable to recover it.
00:28:20.117 [2024-11-19 10:55:27.306694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.117 [2024-11-19 10:55:27.306725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.117 qpair failed and we were unable to recover it.
00:28:20.117 [2024-11-19 10:55:27.307006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.117 [2024-11-19 10:55:27.307039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.117 qpair failed and we were unable to recover it.
00:28:20.117 [2024-11-19 10:55:27.307165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.117 [2024-11-19 10:55:27.307197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.117 qpair failed and we were unable to recover it.
00:28:20.117 [2024-11-19 10:55:27.307458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.117 [2024-11-19 10:55:27.307489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.117 qpair failed and we were unable to recover it.
00:28:20.117 [2024-11-19 10:55:27.307619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.117 [2024-11-19 10:55:27.307650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.117 qpair failed and we were unable to recover it.
00:28:20.117 [2024-11-19 10:55:27.307823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.117 [2024-11-19 10:55:27.307856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.117 qpair failed and we were unable to recover it.
00:28:20.117 [2024-11-19 10:55:27.307982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.117 [2024-11-19 10:55:27.308014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.117 qpair failed and we were unable to recover it.
00:28:20.117 [2024-11-19 10:55:27.308124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.117 [2024-11-19 10:55:27.308156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.117 qpair failed and we were unable to recover it.
00:28:20.117 [2024-11-19 10:55:27.308283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.117 [2024-11-19 10:55:27.308313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.117 qpair failed and we were unable to recover it.
00:28:20.117 [2024-11-19 10:55:27.308580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.117 [2024-11-19 10:55:27.308612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.117 qpair failed and we were unable to recover it.
00:28:20.117 [2024-11-19 10:55:27.308720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.117 [2024-11-19 10:55:27.308751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.117 qpair failed and we were unable to recover it.
00:28:20.117 [2024-11-19 10:55:27.308922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.117 [2024-11-19 10:55:27.308969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.117 qpair failed and we were unable to recover it.
00:28:20.117 [2024-11-19 10:55:27.309142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.118 [2024-11-19 10:55:27.309174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.118 qpair failed and we were unable to recover it.
00:28:20.118 [2024-11-19 10:55:27.309362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.118 [2024-11-19 10:55:27.309393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.118 qpair failed and we were unable to recover it.
00:28:20.118 [2024-11-19 10:55:27.309629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.118 [2024-11-19 10:55:27.309661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.118 qpair failed and we were unable to recover it.
00:28:20.118 [2024-11-19 10:55:27.309843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.118 [2024-11-19 10:55:27.309874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.118 qpair failed and we were unable to recover it.
00:28:20.118 [2024-11-19 10:55:27.310113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.118 [2024-11-19 10:55:27.310146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.118 qpair failed and we were unable to recover it.
00:28:20.118 [2024-11-19 10:55:27.310328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.118 [2024-11-19 10:55:27.310359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.118 qpair failed and we were unable to recover it.
00:28:20.118 [2024-11-19 10:55:27.310621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.118 [2024-11-19 10:55:27.310653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.118 qpair failed and we were unable to recover it.
00:28:20.118 [2024-11-19 10:55:27.310835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.118 [2024-11-19 10:55:27.310866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.118 qpair failed and we were unable to recover it.
00:28:20.118 [2024-11-19 10:55:27.311084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.118 [2024-11-19 10:55:27.311117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.118 qpair failed and we were unable to recover it.
00:28:20.118 [2024-11-19 10:55:27.311303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.118 [2024-11-19 10:55:27.311334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.118 qpair failed and we were unable to recover it.
00:28:20.118 [2024-11-19 10:55:27.311518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.118 [2024-11-19 10:55:27.311550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.118 qpair failed and we were unable to recover it.
00:28:20.118 [2024-11-19 10:55:27.311740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.118 [2024-11-19 10:55:27.311771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.118 qpair failed and we were unable to recover it.
00:28:20.118 [2024-11-19 10:55:27.311979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.118 [2024-11-19 10:55:27.312014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.118 qpair failed and we were unable to recover it.
00:28:20.118 [2024-11-19 10:55:27.312216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.118 [2024-11-19 10:55:27.312249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.118 qpair failed and we were unable to recover it.
00:28:20.118 [2024-11-19 10:55:27.312419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.118 [2024-11-19 10:55:27.312451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.118 qpair failed and we were unable to recover it.
00:28:20.118 [2024-11-19 10:55:27.312703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.118 [2024-11-19 10:55:27.312735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.118 qpair failed and we were unable to recover it.
00:28:20.118 [2024-11-19 10:55:27.312976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.118 [2024-11-19 10:55:27.313009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.118 qpair failed and we were unable to recover it.
00:28:20.118 [2024-11-19 10:55:27.313250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.118 [2024-11-19 10:55:27.313282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.118 qpair failed and we were unable to recover it.
00:28:20.118 [2024-11-19 10:55:27.313521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.118 [2024-11-19 10:55:27.313553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.118 qpair failed and we were unable to recover it.
00:28:20.118 [2024-11-19 10:55:27.313736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.118 [2024-11-19 10:55:27.313767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.118 qpair failed and we were unable to recover it.
00:28:20.118 [2024-11-19 10:55:27.313904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.118 [2024-11-19 10:55:27.313935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.118 qpair failed and we were unable to recover it.
00:28:20.118 [2024-11-19 10:55:27.314184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.118 [2024-11-19 10:55:27.314217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.118 qpair failed and we were unable to recover it.
00:28:20.118 [2024-11-19 10:55:27.314408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.118 [2024-11-19 10:55:27.314439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.118 qpair failed and we were unable to recover it.
00:28:20.118 [2024-11-19 10:55:27.314615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.118 [2024-11-19 10:55:27.314647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.118 qpair failed and we were unable to recover it.
00:28:20.118 [2024-11-19 10:55:27.314820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.118 [2024-11-19 10:55:27.314852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.118 qpair failed and we were unable to recover it.
00:28:20.118 [2024-11-19 10:55:27.315136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.118 [2024-11-19 10:55:27.315169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.118 qpair failed and we were unable to recover it.
00:28:20.118 [2024-11-19 10:55:27.315344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.118 [2024-11-19 10:55:27.315413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:20.118 qpair failed and we were unable to recover it.
00:28:20.118 [2024-11-19 10:55:27.315695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.118 [2024-11-19 10:55:27.315732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:20.118 qpair failed and we were unable to recover it.
00:28:20.118 [2024-11-19 10:55:27.315993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.118 [2024-11-19 10:55:27.316027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:20.118 qpair failed and we were unable to recover it.
00:28:20.118 [2024-11-19 10:55:27.316236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.118 [2024-11-19 10:55:27.316268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:20.118 qpair failed and we were unable to recover it.
00:28:20.118 [2024-11-19 10:55:27.316454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.118 [2024-11-19 10:55:27.316486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:20.118 qpair failed and we were unable to recover it.
00:28:20.118 [2024-11-19 10:55:27.316669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.118 [2024-11-19 10:55:27.316701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:20.118 qpair failed and we were unable to recover it.
00:28:20.118 [2024-11-19 10:55:27.316915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.118 [2024-11-19 10:55:27.316946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:20.118 qpair failed and we were unable to recover it.
00:28:20.118 [2024-11-19 10:55:27.317212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.118 [2024-11-19 10:55:27.317244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:20.118 qpair failed and we were unable to recover it.
00:28:20.118 [2024-11-19 10:55:27.317363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.118 [2024-11-19 10:55:27.317395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:20.118 qpair failed and we were unable to recover it.
00:28:20.118 [2024-11-19 10:55:27.317580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.118 [2024-11-19 10:55:27.317612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:20.118 qpair failed and we were unable to recover it.
00:28:20.118 [2024-11-19 10:55:27.317849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.118 [2024-11-19 10:55:27.317881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:20.118 qpair failed and we were unable to recover it.
00:28:20.118 [2024-11-19 10:55:27.318091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.118 [2024-11-19 10:55:27.318125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:20.118 qpair failed and we were unable to recover it.
00:28:20.119 [2024-11-19 10:55:27.318355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.119 [2024-11-19 10:55:27.318387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:20.119 qpair failed and we were unable to recover it.
00:28:20.119 [2024-11-19 10:55:27.318669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.119 [2024-11-19 10:55:27.318709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:20.119 qpair failed and we were unable to recover it.
00:28:20.119 [2024-11-19 10:55:27.318885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.119 [2024-11-19 10:55:27.318917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:20.119 qpair failed and we were unable to recover it.
00:28:20.119 [2024-11-19 10:55:27.319109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.119 [2024-11-19 10:55:27.319145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.119 qpair failed and we were unable to recover it.
00:28:20.119 [2024-11-19 10:55:27.319264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.119 [2024-11-19 10:55:27.319296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.119 qpair failed and we were unable to recover it.
00:28:20.119 [2024-11-19 10:55:27.319578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.119 [2024-11-19 10:55:27.319609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.119 qpair failed and we were unable to recover it.
00:28:20.119 [2024-11-19 10:55:27.319848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.119 [2024-11-19 10:55:27.319880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.119 qpair failed and we were unable to recover it.
00:28:20.119 [2024-11-19 10:55:27.320058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.119 [2024-11-19 10:55:27.320091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.119 qpair failed and we were unable to recover it.
00:28:20.119 [2024-11-19 10:55:27.320223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.119 [2024-11-19 10:55:27.320255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.119 qpair failed and we were unable to recover it.
00:28:20.119 [2024-11-19 10:55:27.320425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.119 [2024-11-19 10:55:27.320456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.119 qpair failed and we were unable to recover it.
00:28:20.119 [2024-11-19 10:55:27.320707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.119 [2024-11-19 10:55:27.320739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.119 qpair failed and we were unable to recover it.
00:28:20.119 [2024-11-19 10:55:27.321005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.119 [2024-11-19 10:55:27.321037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.119 qpair failed and we were unable to recover it.
00:28:20.119 [2024-11-19 10:55:27.321250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.119 [2024-11-19 10:55:27.321282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.119 qpair failed and we were unable to recover it.
00:28:20.119 [2024-11-19 10:55:27.321410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.119 [2024-11-19 10:55:27.321440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.119 qpair failed and we were unable to recover it.
00:28:20.119 [2024-11-19 10:55:27.321566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.119 [2024-11-19 10:55:27.321598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.119 qpair failed and we were unable to recover it.
00:28:20.119 [2024-11-19 10:55:27.321730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.119 [2024-11-19 10:55:27.321762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.119 qpair failed and we were unable to recover it.
00:28:20.119 [2024-11-19 10:55:27.321893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.119 [2024-11-19 10:55:27.321926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.119 qpair failed and we were unable to recover it.
00:28:20.119 [2024-11-19 10:55:27.322118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.119 [2024-11-19 10:55:27.322150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.119 qpair failed and we were unable to recover it.
00:28:20.119 [2024-11-19 10:55:27.322325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.119 [2024-11-19 10:55:27.322357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.119 qpair failed and we were unable to recover it.
00:28:20.119 [2024-11-19 10:55:27.322475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.119 [2024-11-19 10:55:27.322506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.119 qpair failed and we were unable to recover it.
00:28:20.119 [2024-11-19 10:55:27.322705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.119 [2024-11-19 10:55:27.322738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.119 qpair failed and we were unable to recover it.
00:28:20.119 [2024-11-19 10:55:27.322903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.119 [2024-11-19 10:55:27.322934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.119 qpair failed and we were unable to recover it.
00:28:20.119 [2024-11-19 10:55:27.323082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.119 [2024-11-19 10:55:27.323115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.119 qpair failed and we were unable to recover it.
00:28:20.119 [2024-11-19 10:55:27.323352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.119 [2024-11-19 10:55:27.323384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.119 qpair failed and we were unable to recover it.
00:28:20.119 [2024-11-19 10:55:27.323589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.119 [2024-11-19 10:55:27.323620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.119 qpair failed and we were unable to recover it.
00:28:20.119 [2024-11-19 10:55:27.323809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.119 [2024-11-19 10:55:27.323841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.119 qpair failed and we were unable to recover it.
00:28:20.119 [2024-11-19 10:55:27.324016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.119 [2024-11-19 10:55:27.324049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.119 qpair failed and we were unable to recover it.
00:28:20.119 [2024-11-19 10:55:27.324182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.119 [2024-11-19 10:55:27.324214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.119 qpair failed and we were unable to recover it.
00:28:20.119 [2024-11-19 10:55:27.324558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.119 [2024-11-19 10:55:27.324629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:20.119 qpair failed and we were unable to recover it.
00:28:20.119 [2024-11-19 10:55:27.324827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.119 [2024-11-19 10:55:27.324863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:20.119 qpair failed and we were unable to recover it.
00:28:20.119 [2024-11-19 10:55:27.325129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.119 [2024-11-19 10:55:27.325165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:20.119 qpair failed and we were unable to recover it.
00:28:20.119 [2024-11-19 10:55:27.325361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.119 [2024-11-19 10:55:27.325393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:20.119 qpair failed and we were unable to recover it.
00:28:20.119 [2024-11-19 10:55:27.325581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.119 [2024-11-19 10:55:27.325612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:20.119 qpair failed and we were unable to recover it.
00:28:20.119 [2024-11-19 10:55:27.325879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.119 [2024-11-19 10:55:27.325911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:20.119 qpair failed and we were unable to recover it.
00:28:20.119 [2024-11-19 10:55:27.326142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.119 [2024-11-19 10:55:27.326175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:20.119 qpair failed and we were unable to recover it.
00:28:20.119 [2024-11-19 10:55:27.326417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.119 [2024-11-19 10:55:27.326449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:20.119 qpair failed and we were unable to recover it.
00:28:20.119 [2024-11-19 10:55:27.326581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.119 [2024-11-19 10:55:27.326612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:20.119 qpair failed and we were unable to recover it.
00:28:20.119 [2024-11-19 10:55:27.326731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.119 [2024-11-19 10:55:27.326762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:20.119 qpair failed and we were unable to recover it.
00:28:20.120 [2024-11-19 10:55:27.326959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.120 [2024-11-19 10:55:27.326993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:20.120 qpair failed and we were unable to recover it.
00:28:20.120 [2024-11-19 10:55:27.327167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.120 [2024-11-19 10:55:27.327199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:20.120 qpair failed and we were unable to recover it.
00:28:20.120 [2024-11-19 10:55:27.327440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.120 [2024-11-19 10:55:27.327472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:20.120 qpair failed and we were unable to recover it.
00:28:20.120 [2024-11-19 10:55:27.327644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.120 [2024-11-19 10:55:27.327675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:20.120 qpair failed and we were unable to recover it.
00:28:20.120 [2024-11-19 10:55:27.327853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.120 [2024-11-19 10:55:27.327884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:20.120 qpair failed and we were unable to recover it.
00:28:20.120 [2024-11-19 10:55:27.328026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.120 [2024-11-19 10:55:27.328059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:20.120 qpair failed and we were unable to recover it.
00:28:20.120 [2024-11-19 10:55:27.328324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.120 [2024-11-19 10:55:27.328356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:20.120 qpair failed and we were unable to recover it.
00:28:20.120 [2024-11-19 10:55:27.328472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.120 [2024-11-19 10:55:27.328503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:20.120 qpair failed and we were unable to recover it.
00:28:20.120 [2024-11-19 10:55:27.328685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.120 [2024-11-19 10:55:27.328716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:20.120 qpair failed and we were unable to recover it.
00:28:20.120 [2024-11-19 10:55:27.328908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.120 [2024-11-19 10:55:27.328940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:20.120 qpair failed and we were unable to recover it.
00:28:20.120 [2024-11-19 10:55:27.329088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.120 [2024-11-19 10:55:27.329119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:20.120 qpair failed and we were unable to recover it.
00:28:20.120 [2024-11-19 10:55:27.329380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.120 [2024-11-19 10:55:27.329412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:20.120 qpair failed and we were unable to recover it.
00:28:20.120 [2024-11-19 10:55:27.329596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.120 [2024-11-19 10:55:27.329628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:20.120 qpair failed and we were unable to recover it.
00:28:20.120 [2024-11-19 10:55:27.329814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.120 [2024-11-19 10:55:27.329846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:20.120 qpair failed and we were unable to recover it.
00:28:20.120 [2024-11-19 10:55:27.330030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.120 [2024-11-19 10:55:27.330063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:20.120 qpair failed and we were unable to recover it.
00:28:20.120 [2024-11-19 10:55:27.330246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.120 [2024-11-19 10:55:27.330278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.120 qpair failed and we were unable to recover it. 00:28:20.120 [2024-11-19 10:55:27.330405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.120 [2024-11-19 10:55:27.330436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.120 qpair failed and we were unable to recover it. 00:28:20.120 [2024-11-19 10:55:27.330624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.120 [2024-11-19 10:55:27.330661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.120 qpair failed and we were unable to recover it. 00:28:20.120 [2024-11-19 10:55:27.330911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.120 [2024-11-19 10:55:27.330943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.120 qpair failed and we were unable to recover it. 00:28:20.120 [2024-11-19 10:55:27.331144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.120 [2024-11-19 10:55:27.331176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.120 qpair failed and we were unable to recover it. 00:28:20.120 [2024-11-19 10:55:27.331373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.120 [2024-11-19 10:55:27.331405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.120 qpair failed and we were unable to recover it. 00:28:20.120 [2024-11-19 10:55:27.331594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.120 [2024-11-19 10:55:27.331626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.120 qpair failed and we were unable to recover it. 00:28:20.120 [2024-11-19 10:55:27.331759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.120 [2024-11-19 10:55:27.331791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.120 qpair failed and we were unable to recover it. 00:28:20.120 [2024-11-19 10:55:27.331971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.120 [2024-11-19 10:55:27.332005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.120 qpair failed and we were unable to recover it. 00:28:20.120 [2024-11-19 10:55:27.332274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.120 [2024-11-19 10:55:27.332307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.120 qpair failed and we were unable to recover it. 
00:28:20.120 [2024-11-19 10:55:27.332428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.120 [2024-11-19 10:55:27.332459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.120 qpair failed and we were unable to recover it. 00:28:20.120 [2024-11-19 10:55:27.332700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.120 [2024-11-19 10:55:27.332732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.120 qpair failed and we were unable to recover it. 00:28:20.120 [2024-11-19 10:55:27.332922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.120 [2024-11-19 10:55:27.332963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.120 qpair failed and we were unable to recover it. 00:28:20.120 [2024-11-19 10:55:27.333102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.120 [2024-11-19 10:55:27.333133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.120 qpair failed and we were unable to recover it. 00:28:20.120 [2024-11-19 10:55:27.333315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.120 [2024-11-19 10:55:27.333346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.120 qpair failed and we were unable to recover it. 00:28:20.120 [2024-11-19 10:55:27.333529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.120 [2024-11-19 10:55:27.333561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.120 qpair failed and we were unable to recover it. 00:28:20.120 [2024-11-19 10:55:27.333708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.120 [2024-11-19 10:55:27.333741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.120 qpair failed and we were unable to recover it. 00:28:20.120 [2024-11-19 10:55:27.333860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.120 [2024-11-19 10:55:27.333891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.120 qpair failed and we were unable to recover it. 00:28:20.120 [2024-11-19 10:55:27.334102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.120 [2024-11-19 10:55:27.334135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.120 qpair failed and we were unable to recover it. 00:28:20.120 [2024-11-19 10:55:27.334375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.120 [2024-11-19 10:55:27.334407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.120 qpair failed and we were unable to recover it. 
00:28:20.120 [2024-11-19 10:55:27.334589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.120 [2024-11-19 10:55:27.334620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.120 qpair failed and we were unable to recover it. 00:28:20.120 [2024-11-19 10:55:27.334859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.120 [2024-11-19 10:55:27.334890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.120 qpair failed and we were unable to recover it. 00:28:20.120 [2024-11-19 10:55:27.335005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.120 [2024-11-19 10:55:27.335038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.120 qpair failed and we were unable to recover it. 00:28:20.120 [2024-11-19 10:55:27.335285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.121 [2024-11-19 10:55:27.335317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.121 qpair failed and we were unable to recover it. 00:28:20.121 [2024-11-19 10:55:27.335496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.121 [2024-11-19 10:55:27.335527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.121 qpair failed and we were unable to recover it. 00:28:20.121 [2024-11-19 10:55:27.335738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.121 [2024-11-19 10:55:27.335770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.121 qpair failed and we were unable to recover it. 00:28:20.121 [2024-11-19 10:55:27.336015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.121 [2024-11-19 10:55:27.336048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.121 qpair failed and we were unable to recover it. 00:28:20.121 [2024-11-19 10:55:27.336243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.121 [2024-11-19 10:55:27.336275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.121 qpair failed and we were unable to recover it. 00:28:20.121 [2024-11-19 10:55:27.336412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.121 [2024-11-19 10:55:27.336443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.121 qpair failed and we were unable to recover it. 00:28:20.121 [2024-11-19 10:55:27.336683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.121 [2024-11-19 10:55:27.336720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.121 qpair failed and we were unable to recover it. 
00:28:20.121 [2024-11-19 10:55:27.336903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.121 [2024-11-19 10:55:27.336935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.121 qpair failed and we were unable to recover it. 00:28:20.121 [2024-11-19 10:55:27.337132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.121 [2024-11-19 10:55:27.337164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.121 qpair failed and we were unable to recover it. 00:28:20.121 [2024-11-19 10:55:27.337270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.121 [2024-11-19 10:55:27.337302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.121 qpair failed and we were unable to recover it. 00:28:20.121 [2024-11-19 10:55:27.337493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.121 [2024-11-19 10:55:27.337524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.121 qpair failed and we were unable to recover it. 00:28:20.121 [2024-11-19 10:55:27.337707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.121 [2024-11-19 10:55:27.337738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.121 qpair failed and we were unable to recover it. 00:28:20.121 [2024-11-19 10:55:27.337923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.121 [2024-11-19 10:55:27.337964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.121 qpair failed and we were unable to recover it. 00:28:20.121 [2024-11-19 10:55:27.338083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.121 [2024-11-19 10:55:27.338115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.121 qpair failed and we were unable to recover it. 00:28:20.121 [2024-11-19 10:55:27.338298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.121 [2024-11-19 10:55:27.338329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.121 qpair failed and we were unable to recover it. 00:28:20.121 [2024-11-19 10:55:27.338443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.121 [2024-11-19 10:55:27.338475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.121 qpair failed and we were unable to recover it. 00:28:20.121 [2024-11-19 10:55:27.338647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.121 [2024-11-19 10:55:27.338678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.121 qpair failed and we were unable to recover it. 
00:28:20.121 [2024-11-19 10:55:27.338847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.121 [2024-11-19 10:55:27.338879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.121 qpair failed and we were unable to recover it. 00:28:20.121 [2024-11-19 10:55:27.339014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.121 [2024-11-19 10:55:27.339047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.121 qpair failed and we were unable to recover it. 00:28:20.121 [2024-11-19 10:55:27.339238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.121 [2024-11-19 10:55:27.339269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.121 qpair failed and we were unable to recover it. 00:28:20.121 [2024-11-19 10:55:27.339387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.121 [2024-11-19 10:55:27.339419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.121 qpair failed and we were unable to recover it. 00:28:20.121 [2024-11-19 10:55:27.339540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.121 [2024-11-19 10:55:27.339571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.121 qpair failed and we were unable to recover it. 00:28:20.121 [2024-11-19 10:55:27.339752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.121 [2024-11-19 10:55:27.339784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.121 qpair failed and we were unable to recover it. 00:28:20.121 [2024-11-19 10:55:27.339992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.121 [2024-11-19 10:55:27.340026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.121 qpair failed and we were unable to recover it. 00:28:20.121 [2024-11-19 10:55:27.340199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.121 [2024-11-19 10:55:27.340230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.121 qpair failed and we were unable to recover it. 00:28:20.121 [2024-11-19 10:55:27.340428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.121 [2024-11-19 10:55:27.340459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.121 qpair failed and we were unable to recover it. 00:28:20.121 [2024-11-19 10:55:27.340708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.121 [2024-11-19 10:55:27.340739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.121 qpair failed and we were unable to recover it. 
00:28:20.121 [2024-11-19 10:55:27.340941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.121 [2024-11-19 10:55:27.340983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.121 qpair failed and we were unable to recover it. 00:28:20.121 [2024-11-19 10:55:27.341242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.121 [2024-11-19 10:55:27.341274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.121 qpair failed and we were unable to recover it. 00:28:20.121 [2024-11-19 10:55:27.341454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.121 [2024-11-19 10:55:27.341486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.121 qpair failed and we were unable to recover it. 00:28:20.121 [2024-11-19 10:55:27.341595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.121 [2024-11-19 10:55:27.341626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.121 qpair failed and we were unable to recover it. 00:28:20.121 [2024-11-19 10:55:27.341766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.121 [2024-11-19 10:55:27.341797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.121 qpair failed and we were unable to recover it. 00:28:20.121 [2024-11-19 10:55:27.342034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.121 [2024-11-19 10:55:27.342067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.121 qpair failed and we were unable to recover it. 00:28:20.121 [2024-11-19 10:55:27.342202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.121 [2024-11-19 10:55:27.342240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.121 qpair failed and we were unable to recover it. 00:28:20.121 [2024-11-19 10:55:27.342344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.122 [2024-11-19 10:55:27.342376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.122 qpair failed and we were unable to recover it. 00:28:20.122 [2024-11-19 10:55:27.342615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.122 [2024-11-19 10:55:27.342646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.122 qpair failed and we were unable to recover it. 00:28:20.122 [2024-11-19 10:55:27.342761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.122 [2024-11-19 10:55:27.342793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.122 qpair failed and we were unable to recover it. 
00:28:20.122 [2024-11-19 10:55:27.342930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.122 [2024-11-19 10:55:27.342982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.122 qpair failed and we were unable to recover it. 00:28:20.122 [2024-11-19 10:55:27.343112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.122 [2024-11-19 10:55:27.343144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.122 qpair failed and we were unable to recover it. 00:28:20.122 [2024-11-19 10:55:27.343322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.122 [2024-11-19 10:55:27.343353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.122 qpair failed and we were unable to recover it. 00:28:20.122 [2024-11-19 10:55:27.343528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.122 [2024-11-19 10:55:27.343560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.122 qpair failed and we were unable to recover it. 00:28:20.122 [2024-11-19 10:55:27.343671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.122 [2024-11-19 10:55:27.343702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.122 qpair failed and we were unable to recover it. 00:28:20.122 [2024-11-19 10:55:27.343883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.122 [2024-11-19 10:55:27.343915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.122 qpair failed and we were unable to recover it. 00:28:20.122 [2024-11-19 10:55:27.344106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.122 [2024-11-19 10:55:27.344139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.122 qpair failed and we were unable to recover it. 00:28:20.122 [2024-11-19 10:55:27.344398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.122 [2024-11-19 10:55:27.344430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.122 qpair failed and we were unable to recover it. 00:28:20.122 [2024-11-19 10:55:27.344549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.122 [2024-11-19 10:55:27.344581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.122 qpair failed and we were unable to recover it. 00:28:20.122 [2024-11-19 10:55:27.344762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.122 [2024-11-19 10:55:27.344794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.122 qpair failed and we were unable to recover it. 
00:28:20.122 [2024-11-19 10:55:27.345016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.122 [2024-11-19 10:55:27.345050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.122 qpair failed and we were unable to recover it. 00:28:20.122 [2024-11-19 10:55:27.345176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.122 [2024-11-19 10:55:27.345206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.122 qpair failed and we were unable to recover it. 00:28:20.122 [2024-11-19 10:55:27.345391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.122 [2024-11-19 10:55:27.345423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.122 qpair failed and we were unable to recover it. 00:28:20.122 [2024-11-19 10:55:27.345683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.122 [2024-11-19 10:55:27.345715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.122 qpair failed and we were unable to recover it. 00:28:20.122 [2024-11-19 10:55:27.345972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.122 [2024-11-19 10:55:27.346004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.122 qpair failed and we were unable to recover it. 00:28:20.122 [2024-11-19 10:55:27.346138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.122 [2024-11-19 10:55:27.346169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.122 qpair failed and we were unable to recover it. 00:28:20.122 [2024-11-19 10:55:27.346354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.122 [2024-11-19 10:55:27.346386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.122 qpair failed and we were unable to recover it. 00:28:20.122 [2024-11-19 10:55:27.346561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.122 [2024-11-19 10:55:27.346602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.122 qpair failed and we were unable to recover it. 00:28:20.122 [2024-11-19 10:55:27.346740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.122 [2024-11-19 10:55:27.346772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.122 qpair failed and we were unable to recover it. 00:28:20.122 [2024-11-19 10:55:27.346944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.122 [2024-11-19 10:55:27.346986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.122 qpair failed and we were unable to recover it. 
00:28:20.122 [2024-11-19 10:55:27.347173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.122 [2024-11-19 10:55:27.347204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.122 qpair failed and we were unable to recover it. 00:28:20.122 [2024-11-19 10:55:27.347486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.122 [2024-11-19 10:55:27.347517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.122 qpair failed and we were unable to recover it. 00:28:20.122 [2024-11-19 10:55:27.347650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.122 [2024-11-19 10:55:27.347681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.122 qpair failed and we were unable to recover it. 00:28:20.122 [2024-11-19 10:55:27.347856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.122 [2024-11-19 10:55:27.347888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.122 qpair failed and we were unable to recover it. 00:28:20.122 [2024-11-19 10:55:27.348085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.122 [2024-11-19 10:55:27.348117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.122 qpair failed and we were unable to recover it. 00:28:20.122 [2024-11-19 10:55:27.348284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.122 [2024-11-19 10:55:27.348316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.122 qpair failed and we were unable to recover it. 00:28:20.122 [2024-11-19 10:55:27.348436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.122 [2024-11-19 10:55:27.348467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.122 qpair failed and we were unable to recover it. 00:28:20.122 [2024-11-19 10:55:27.348653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.122 [2024-11-19 10:55:27.348684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.122 qpair failed and we were unable to recover it. 00:28:20.122 [2024-11-19 10:55:27.348910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.122 [2024-11-19 10:55:27.348941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.122 qpair failed and we were unable to recover it. 00:28:20.122 [2024-11-19 10:55:27.349122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.122 [2024-11-19 10:55:27.349153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.122 qpair failed and we were unable to recover it. 
00:28:20.122 [2024-11-19 10:55:27.349322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.122 [2024-11-19 10:55:27.349353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.122 qpair failed and we were unable to recover it. 00:28:20.122 [2024-11-19 10:55:27.349478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.122 [2024-11-19 10:55:27.349509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.122 qpair failed and we were unable to recover it. 00:28:20.122 [2024-11-19 10:55:27.349785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.122 [2024-11-19 10:55:27.349816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.122 qpair failed and we were unable to recover it. 00:28:20.122 [2024-11-19 10:55:27.350012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.122 [2024-11-19 10:55:27.350045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.122 qpair failed and we were unable to recover it. 00:28:20.122 [2024-11-19 10:55:27.350236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.122 [2024-11-19 10:55:27.350270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.122 qpair failed and we were unable to recover it. 00:28:20.122 [2024-11-19 10:55:27.350406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.123 [2024-11-19 10:55:27.350437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.123 qpair failed and we were unable to recover it. 00:28:20.123 [2024-11-19 10:55:27.350560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.123 [2024-11-19 10:55:27.350592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.123 qpair failed and we were unable to recover it. 00:28:20.123 [2024-11-19 10:55:27.350850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.123 [2024-11-19 10:55:27.350921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.123 qpair failed and we were unable to recover it. 00:28:20.123 [2024-11-19 10:55:27.351204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.123 [2024-11-19 10:55:27.351240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.123 qpair failed and we were unable to recover it. 00:28:20.123 [2024-11-19 10:55:27.351432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.123 [2024-11-19 10:55:27.351464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.123 qpair failed and we were unable to recover it. 
00:28:20.123 [2024-11-19 10:55:27.351595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.123 [2024-11-19 10:55:27.351628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.123 qpair failed and we were unable to recover it. 00:28:20.123 [2024-11-19 10:55:27.351751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.123 [2024-11-19 10:55:27.351782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.123 qpair failed and we were unable to recover it. 00:28:20.123 [2024-11-19 10:55:27.351965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.123 [2024-11-19 10:55:27.351999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.123 qpair failed and we were unable to recover it. 00:28:20.123 [2024-11-19 10:55:27.352195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.123 [2024-11-19 10:55:27.352226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.123 qpair failed and we were unable to recover it. 00:28:20.123 [2024-11-19 10:55:27.352485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.123 [2024-11-19 10:55:27.352517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.123 qpair failed and we were unable to recover it. 00:28:20.123 [2024-11-19 10:55:27.352791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.123 [2024-11-19 10:55:27.352823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.123 qpair failed and we were unable to recover it. 00:28:20.123 [2024-11-19 10:55:27.353012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.123 [2024-11-19 10:55:27.353046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.123 qpair failed and we were unable to recover it. 00:28:20.123 [2024-11-19 10:55:27.353189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.123 [2024-11-19 10:55:27.353221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.123 qpair failed and we were unable to recover it. 00:28:20.123 [2024-11-19 10:55:27.353450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.123 [2024-11-19 10:55:27.353481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.123 qpair failed and we were unable to recover it. 00:28:20.123 [2024-11-19 10:55:27.353726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.123 [2024-11-19 10:55:27.353758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.123 qpair failed and we were unable to recover it. 
00:28:20.123 [2024-11-19 10:55:27.353930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.123 [2024-11-19 10:55:27.353978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.123 qpair failed and we were unable to recover it. 00:28:20.123 [2024-11-19 10:55:27.354195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.123 [2024-11-19 10:55:27.354227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.123 qpair failed and we were unable to recover it. 00:28:20.123 [2024-11-19 10:55:27.354501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.123 [2024-11-19 10:55:27.354532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.123 qpair failed and we were unable to recover it. 00:28:20.123 [2024-11-19 10:55:27.354671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.123 [2024-11-19 10:55:27.354702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.123 qpair failed and we were unable to recover it. 00:28:20.123 [2024-11-19 10:55:27.354877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.123 [2024-11-19 10:55:27.354909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.123 qpair failed and we were unable to recover it. 00:28:20.123 [2024-11-19 10:55:27.355107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.123 [2024-11-19 10:55:27.355139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.123 qpair failed and we were unable to recover it. 00:28:20.123 [2024-11-19 10:55:27.355384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.123 [2024-11-19 10:55:27.355416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.123 qpair failed and we were unable to recover it. 00:28:20.123 [2024-11-19 10:55:27.355554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.123 [2024-11-19 10:55:27.355585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.123 qpair failed and we were unable to recover it. 00:28:20.123 [2024-11-19 10:55:27.355819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.123 [2024-11-19 10:55:27.355849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.123 qpair failed and we were unable to recover it. 00:28:20.123 [2024-11-19 10:55:27.356077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.123 [2024-11-19 10:55:27.356111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.123 qpair failed and we were unable to recover it. 
00:28:20.123 [2024-11-19 10:55:27.356375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.123 [2024-11-19 10:55:27.356407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.123 qpair failed and we were unable to recover it. 00:28:20.123 [2024-11-19 10:55:27.356671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.123 [2024-11-19 10:55:27.356702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.123 qpair failed and we were unable to recover it. 00:28:20.123 [2024-11-19 10:55:27.356822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.123 [2024-11-19 10:55:27.356854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.123 qpair failed and we were unable to recover it. 00:28:20.123 [2024-11-19 10:55:27.357037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.123 [2024-11-19 10:55:27.357070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.123 qpair failed and we were unable to recover it. 00:28:20.123 [2024-11-19 10:55:27.357216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.123 [2024-11-19 10:55:27.357248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.123 qpair failed and we were unable to recover it. 00:28:20.123 [2024-11-19 10:55:27.357427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.123 [2024-11-19 10:55:27.357459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.123 qpair failed and we were unable to recover it. 00:28:20.123 [2024-11-19 10:55:27.357650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.123 [2024-11-19 10:55:27.357682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.123 qpair failed and we were unable to recover it. 00:28:20.123 [2024-11-19 10:55:27.357866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.123 [2024-11-19 10:55:27.357898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.123 qpair failed and we were unable to recover it. 00:28:20.123 [2024-11-19 10:55:27.358041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.123 [2024-11-19 10:55:27.358074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.123 qpair failed and we were unable to recover it. 00:28:20.123 [2024-11-19 10:55:27.358357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.123 [2024-11-19 10:55:27.358389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.123 qpair failed and we were unable to recover it. 
00:28:20.123 [2024-11-19 10:55:27.358582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.123 [2024-11-19 10:55:27.358613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.123 qpair failed and we were unable to recover it. 00:28:20.123 [2024-11-19 10:55:27.358889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.123 [2024-11-19 10:55:27.358921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.123 qpair failed and we were unable to recover it. 00:28:20.123 [2024-11-19 10:55:27.359121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.123 [2024-11-19 10:55:27.359154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.123 qpair failed and we were unable to recover it. 00:28:20.124 [2024-11-19 10:55:27.359342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.124 [2024-11-19 10:55:27.359374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.124 qpair failed and we were unable to recover it. 00:28:20.124 [2024-11-19 10:55:27.359612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.124 [2024-11-19 10:55:27.359644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.124 qpair failed and we were unable to recover it. 00:28:20.124 [2024-11-19 10:55:27.359912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.124 [2024-11-19 10:55:27.359944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.124 qpair failed and we were unable to recover it. 00:28:20.124 [2024-11-19 10:55:27.360076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.124 [2024-11-19 10:55:27.360108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.124 qpair failed and we were unable to recover it. 00:28:20.124 [2024-11-19 10:55:27.360288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.124 [2024-11-19 10:55:27.360320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.124 qpair failed and we were unable to recover it. 00:28:20.124 [2024-11-19 10:55:27.360452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.124 [2024-11-19 10:55:27.360483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.124 qpair failed and we were unable to recover it. 00:28:20.124 [2024-11-19 10:55:27.360690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.124 [2024-11-19 10:55:27.360721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.124 qpair failed and we were unable to recover it. 
00:28:20.124 [2024-11-19 10:55:27.360844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.124 [2024-11-19 10:55:27.360875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.124 qpair failed and we were unable to recover it. 00:28:20.124 [2024-11-19 10:55:27.361111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.124 [2024-11-19 10:55:27.361144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.124 qpair failed and we were unable to recover it. 00:28:20.124 [2024-11-19 10:55:27.361261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.124 [2024-11-19 10:55:27.361292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.124 qpair failed and we were unable to recover it. 00:28:20.124 [2024-11-19 10:55:27.361463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.124 [2024-11-19 10:55:27.361494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.124 qpair failed and we were unable to recover it. 00:28:20.124 [2024-11-19 10:55:27.361682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.124 [2024-11-19 10:55:27.361714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.124 qpair failed and we were unable to recover it. 00:28:20.124 [2024-11-19 10:55:27.361925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.124 [2024-11-19 10:55:27.361962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.124 qpair failed and we were unable to recover it. 00:28:20.124 [2024-11-19 10:55:27.362090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.124 [2024-11-19 10:55:27.362122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.124 qpair failed and we were unable to recover it. 00:28:20.124 [2024-11-19 10:55:27.362378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.124 [2024-11-19 10:55:27.362410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.124 qpair failed and we were unable to recover it. 00:28:20.124 [2024-11-19 10:55:27.362672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.124 [2024-11-19 10:55:27.362703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.124 qpair failed and we were unable to recover it. 00:28:20.124 [2024-11-19 10:55:27.362896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.124 [2024-11-19 10:55:27.362927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.124 qpair failed and we were unable to recover it. 
00:28:20.124 [2024-11-19 10:55:27.363174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.124 [2024-11-19 10:55:27.363205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:20.124 qpair failed and we were unable to recover it.
[... the identical "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it" triple repeats for every reconnect attempt from 10:55:27.363 through 10:55:27.402 ...]
00:28:20.129 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1850917 Killed "${NVMF_APP[@]}" "$@"
00:28:20.129 [2024-11-19 10:55:27.403042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.129 [2024-11-19 10:55:27.403075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:20.129 qpair failed and we were unable to recover it.
00:28:20.129 10:55:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:28:20.129 10:55:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:28:20.129 10:55:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:20.129 10:55:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:20.129 10:55:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... the connect() errno = 111 / qpair failure triples continue interleaved between these trace lines, 10:55:27.403 through 10:55:27.404 ...]
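The -m 0xF0 argument to nvmfappstart in the trace above is SPDK's reactor core mask: each set bit pins one reactor thread to that CPU core. A minimal sketch in plain C (not SPDK code) decoding the mask from this log:

/* Minimal sketch (not SPDK code): decode the hex core mask passed as
 * `nvmfappstart -m 0xF0` above. Each set bit selects one CPU core for
 * an SPDK reactor; 0xF0 selects cores 4-7. */
#include <stdio.h>

int main(void)
{
    unsigned int mask = 0xF0;   /* value taken from the log line above */

    printf("core mask 0x%X -> cores:", mask);
    for (int bit = 0; bit < 32; bit++) {
        if (mask >> bit & 1) {
            printf(" %d", bit);
        }
    }
    printf("\n");               /* prints: core mask 0xF0 -> cores: 4 5 6 7 */
    return 0;
}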
00:28:20.129 [2024-11-19 10:55:27.405068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.129 [2024-11-19 10:55:27.405102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:20.129 qpair failed and we were unable to recover it.
00:28:20.130 [2024-11-19 10:55:27.411258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.130 [2024-11-19 10:55:27.411291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:20.130 10:55:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1851646
00:28:20.130 qpair failed and we were unable to recover it.
00:28:20.130 [2024-11-19 10:55:27.411465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.130 [2024-11-19 10:55:27.411537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:20.130 qpair failed and we were unable to recover it.
00:28:20.130 10:55:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1851646
00:28:20.130 [2024-11-19 10:55:27.411747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.130 10:55:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:28:20.130 [2024-11-19 10:55:27.411785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:20.130 qpair failed and we were unable to recover it.
00:28:20.130 [2024-11-19 10:55:27.412029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.130 10:55:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1851646 ']'
00:28:20.130 [2024-11-19 10:55:27.412064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:20.130 qpair failed and we were unable to recover it.
00:28:20.130 [2024-11-19 10:55:27.412200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.130 [2024-11-19 10:55:27.412242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:20.130 qpair failed and we were unable to recover it.
00:28:20.130 10:55:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:20.130 [2024-11-19 10:55:27.412429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.130 [2024-11-19 10:55:27.412462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:20.130 qpair failed and we were unable to recover it.
00:28:20.130 10:55:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:20.130 [2024-11-19 10:55:27.412730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.130 [2024-11-19 10:55:27.412764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:20.130 qpair failed and we were unable to recover it.
00:28:20.130 10:55:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:20.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
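The trace above shows nvmf_tgt being relaunched (nvmfpid=1851646) while waitforlisten polls until the new process answers on rpc_addr=/var/tmp/spdk.sock, giving up after max_retries=100 attempts. A sketch of that wait loop in C, assuming a plain AF_UNIX connect() probe; this is illustrative only, not the autotest shell helper itself (which the trace shows living in autotest_common.sh):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

/* Retry connecting to the app's RPC UNIX domain socket until the target
 * process is up, bounded by max_retries. Hypothetical helper name. */
static int wait_for_listen(const char *path, int max_retries)
{
    struct sockaddr_un addr;
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;               /* something is listening */
        }
        close(fd);
        usleep(100 * 1000);         /* back off 100 ms between attempts */
    }
    return -1;                      /* gave up; process never listened */
}

int main(void)
{
    if (wait_for_listen("/var/tmp/spdk.sock", 100) == 0)
        printf("target is up\n");
    else
        printf("timed out waiting for /var/tmp/spdk.sock\n");
    return 0;
}

Until that socket appears, the host-side qpair reconnects against 10.0.0.2:4420 keep failing, which is exactly the stream of errno 111 records that continues below.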
00:28:20.130 [2024-11-19 10:55:27.413036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.130 [2024-11-19 10:55:27.413071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:20.130 qpair failed and we were unable to recover it.
00:28:20.130 10:55:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:20.130 [2024-11-19 10:55:27.413262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.130 [2024-11-19 10:55:27.413297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:20.130 qpair failed and we were unable to recover it.
00:28:20.130 10:55:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:20.130 [2024-11-19 10:55:27.413567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.130 [2024-11-19 10:55:27.413602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:20.130 qpair failed and we were unable to recover it.
00:28:20.134 [2024-11-19 10:55:27.446723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.134 [2024-11-19 10:55:27.446754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:20.134 qpair failed and we were unable to recover it.
00:28:20.134 [2024-11-19 10:55:27.446852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.134 [2024-11-19 10:55:27.446883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.134 qpair failed and we were unable to recover it. 00:28:20.134 [2024-11-19 10:55:27.447119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.134 [2024-11-19 10:55:27.447151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.134 qpair failed and we were unable to recover it. 00:28:20.134 [2024-11-19 10:55:27.447334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.134 [2024-11-19 10:55:27.447366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.134 qpair failed and we were unable to recover it. 00:28:20.134 [2024-11-19 10:55:27.447482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.134 [2024-11-19 10:55:27.447512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.134 qpair failed and we were unable to recover it. 00:28:20.134 [2024-11-19 10:55:27.447682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.134 [2024-11-19 10:55:27.447714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.134 qpair failed and we were unable to recover it. 00:28:20.134 [2024-11-19 10:55:27.447829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.134 [2024-11-19 10:55:27.447860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.134 qpair failed and we were unable to recover it. 00:28:20.134 [2024-11-19 10:55:27.448099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.134 [2024-11-19 10:55:27.448133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.134 qpair failed and we were unable to recover it. 00:28:20.135 [2024-11-19 10:55:27.448393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.135 [2024-11-19 10:55:27.448423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.135 qpair failed and we were unable to recover it. 00:28:20.135 [2024-11-19 10:55:27.448692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.135 [2024-11-19 10:55:27.448724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.135 qpair failed and we were unable to recover it. 00:28:20.135 [2024-11-19 10:55:27.448856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.135 [2024-11-19 10:55:27.448887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.135 qpair failed and we were unable to recover it. 
00:28:20.135 [2024-11-19 10:55:27.449099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.135 [2024-11-19 10:55:27.449132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.135 qpair failed and we were unable to recover it. 00:28:20.135 [2024-11-19 10:55:27.449339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.135 [2024-11-19 10:55:27.449370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.135 qpair failed and we were unable to recover it. 00:28:20.135 [2024-11-19 10:55:27.449632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.135 [2024-11-19 10:55:27.449663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.135 qpair failed and we were unable to recover it. 00:28:20.135 [2024-11-19 10:55:27.449904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.135 [2024-11-19 10:55:27.449936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.135 qpair failed and we were unable to recover it. 00:28:20.135 [2024-11-19 10:55:27.450076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.135 [2024-11-19 10:55:27.450108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.135 qpair failed and we were unable to recover it. 00:28:20.135 [2024-11-19 10:55:27.450344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.135 [2024-11-19 10:55:27.450374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.135 qpair failed and we were unable to recover it. 00:28:20.135 [2024-11-19 10:55:27.450550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.135 [2024-11-19 10:55:27.450582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.135 qpair failed and we were unable to recover it. 00:28:20.135 [2024-11-19 10:55:27.450754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.135 [2024-11-19 10:55:27.450785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.135 qpair failed and we were unable to recover it. 00:28:20.135 [2024-11-19 10:55:27.450927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.135 [2024-11-19 10:55:27.450970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.135 qpair failed and we were unable to recover it. 00:28:20.135 [2024-11-19 10:55:27.451157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.135 [2024-11-19 10:55:27.451188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.135 qpair failed and we were unable to recover it. 
00:28:20.135 [2024-11-19 10:55:27.451387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.135 [2024-11-19 10:55:27.451419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.135 qpair failed and we were unable to recover it. 00:28:20.135 [2024-11-19 10:55:27.451622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.135 [2024-11-19 10:55:27.451653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.135 qpair failed and we were unable to recover it. 00:28:20.135 [2024-11-19 10:55:27.451849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.135 [2024-11-19 10:55:27.451887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.135 qpair failed and we were unable to recover it. 00:28:20.135 [2024-11-19 10:55:27.452012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.135 [2024-11-19 10:55:27.452045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.135 qpair failed and we were unable to recover it. 00:28:20.135 [2024-11-19 10:55:27.452342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.135 [2024-11-19 10:55:27.452374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.135 qpair failed and we were unable to recover it. 00:28:20.135 [2024-11-19 10:55:27.452479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.135 [2024-11-19 10:55:27.452509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.135 qpair failed and we were unable to recover it. 00:28:20.135 [2024-11-19 10:55:27.452602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.135 [2024-11-19 10:55:27.452634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.135 qpair failed and we were unable to recover it. 00:28:20.135 [2024-11-19 10:55:27.452854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.135 [2024-11-19 10:55:27.452885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.135 qpair failed and we were unable to recover it. 00:28:20.135 [2024-11-19 10:55:27.453085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.135 [2024-11-19 10:55:27.453118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.135 qpair failed and we were unable to recover it. 00:28:20.135 [2024-11-19 10:55:27.453370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.135 [2024-11-19 10:55:27.453402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.135 qpair failed and we were unable to recover it. 
00:28:20.135 [2024-11-19 10:55:27.453580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.135 [2024-11-19 10:55:27.453611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.135 qpair failed and we were unable to recover it. 00:28:20.135 [2024-11-19 10:55:27.453874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.135 [2024-11-19 10:55:27.453905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.135 qpair failed and we were unable to recover it. 00:28:20.135 [2024-11-19 10:55:27.454052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.135 [2024-11-19 10:55:27.454103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.135 qpair failed and we were unable to recover it. 00:28:20.135 [2024-11-19 10:55:27.454340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.135 [2024-11-19 10:55:27.454373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.135 qpair failed and we were unable to recover it. 00:28:20.135 [2024-11-19 10:55:27.454551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.135 [2024-11-19 10:55:27.454583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.135 qpair failed and we were unable to recover it. 00:28:20.135 [2024-11-19 10:55:27.454706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.135 [2024-11-19 10:55:27.454736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.135 qpair failed and we were unable to recover it. 00:28:20.135 [2024-11-19 10:55:27.454913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.135 [2024-11-19 10:55:27.454964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.135 qpair failed and we were unable to recover it. 00:28:20.135 [2024-11-19 10:55:27.455215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.135 [2024-11-19 10:55:27.455247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.135 qpair failed and we were unable to recover it. 00:28:20.135 [2024-11-19 10:55:27.455442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.135 [2024-11-19 10:55:27.455473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.135 qpair failed and we were unable to recover it. 00:28:20.135 [2024-11-19 10:55:27.455586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.135 [2024-11-19 10:55:27.455618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.135 qpair failed and we were unable to recover it. 
00:28:20.135 [2024-11-19 10:55:27.455811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.135 [2024-11-19 10:55:27.455842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.135 qpair failed and we were unable to recover it. 00:28:20.135 [2024-11-19 10:55:27.456036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.135 [2024-11-19 10:55:27.456069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.135 qpair failed and we were unable to recover it. 00:28:20.135 [2024-11-19 10:55:27.456367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.135 [2024-11-19 10:55:27.456397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.135 qpair failed and we were unable to recover it. 00:28:20.135 [2024-11-19 10:55:27.456615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.135 [2024-11-19 10:55:27.456647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.135 qpair failed and we were unable to recover it. 00:28:20.135 [2024-11-19 10:55:27.456781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.136 [2024-11-19 10:55:27.456813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.136 qpair failed and we were unable to recover it. 00:28:20.136 [2024-11-19 10:55:27.457078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.136 [2024-11-19 10:55:27.457112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.136 qpair failed and we were unable to recover it. 00:28:20.136 [2024-11-19 10:55:27.457305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.136 [2024-11-19 10:55:27.457336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.136 qpair failed and we were unable to recover it. 00:28:20.136 [2024-11-19 10:55:27.457457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.136 [2024-11-19 10:55:27.457488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.136 qpair failed and we were unable to recover it. 00:28:20.136 [2024-11-19 10:55:27.457673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.136 [2024-11-19 10:55:27.457704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.136 qpair failed and we were unable to recover it. 00:28:20.136 [2024-11-19 10:55:27.457982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.136 [2024-11-19 10:55:27.458015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.136 qpair failed and we were unable to recover it. 
00:28:20.136 [2024-11-19 10:55:27.458187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.136 [2024-11-19 10:55:27.458218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.136 qpair failed and we were unable to recover it. 00:28:20.136 [2024-11-19 10:55:27.458453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.136 [2024-11-19 10:55:27.458486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.136 qpair failed and we were unable to recover it. 00:28:20.136 [2024-11-19 10:55:27.458677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.136 [2024-11-19 10:55:27.458707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.136 qpair failed and we were unable to recover it. 00:28:20.136 [2024-11-19 10:55:27.458912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.136 [2024-11-19 10:55:27.458944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.136 qpair failed and we were unable to recover it. 00:28:20.136 [2024-11-19 10:55:27.459060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.136 [2024-11-19 10:55:27.459092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.136 qpair failed and we were unable to recover it. 00:28:20.136 [2024-11-19 10:55:27.459328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.136 [2024-11-19 10:55:27.459360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.136 qpair failed and we were unable to recover it. 00:28:20.136 [2024-11-19 10:55:27.459600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.136 [2024-11-19 10:55:27.459632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.136 qpair failed and we were unable to recover it. 00:28:20.136 [2024-11-19 10:55:27.459803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.136 [2024-11-19 10:55:27.459835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.136 qpair failed and we were unable to recover it. 00:28:20.136 [2024-11-19 10:55:27.459941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.136 [2024-11-19 10:55:27.460004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.136 qpair failed and we were unable to recover it. 00:28:20.136 [2024-11-19 10:55:27.460281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.136 [2024-11-19 10:55:27.460313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.136 qpair failed and we were unable to recover it. 
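Editor's note: errno = 111 is ECONNREFUSED on Linux, i.e. the host at 10.0.0.2 is reachable but nothing is accepting connections on port 4420 yet, so the NVMe/TCP initiator keeps failing its qpair connects and logging the pair of errors above on every retry. A minimal, self-contained C sketch of the failing call (illustrative only, not SPDK source):

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Attempt a plain TCP connect() to 10.0.0.2:4420 and report errno.
 * errno == ECONNREFUSED (111 on Linux) means the peer is up but no
 * listener is bound to the port -- the situation logged above. */
int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* NVMe/TCP default port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* In this test run, this is where errno = 111 comes from. */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
    }
    close(fd);
    return 0;
}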
00:28:20.136 [2024-11-19 10:55:27.460391] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:28:20.136 [2024-11-19 10:55:27.460442] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:20.136 [2024-11-19 10:55:27.460528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.136 [2024-11-19 10:55:27.460560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:20.136 qpair failed and we were unable to recover it.
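Editor's note: the bracketed line above is the DPDK EAL argument vector that SPDK's env layer generated for this nvmf app instance (core mask 0xF0, fixed base virtual address, shared-memory file prefix spdk0). As a rough sketch of what that amounts to, assuming DPDK 24.03 headers and using only the documented rte_eal_init()/rte_eal_cleanup() entry points (this is not SPDK source; SPDK builds an equivalent argv internally):

#include <rte_eal.h>
#include <stdio.h>

/* Hand the EAL parameters from the log above to DPDK directly. */
int main(void)
{
    char *eal_argv[] = {
        "nvmf",                          /* program-name slot */
        "-c", "0xF0",                    /* core mask: cores 4-7 */
        "--no-telemetry",
        "--log-level=lib.eal:6",
        "--base-virtaddr=0x200000000000",
        "--match-allocations",
        "--file-prefix=spdk0",
        "--proc-type=auto",
    };
    int eal_argc = (int)(sizeof(eal_argv) / sizeof(eal_argv[0]));

    if (rte_eal_init(eal_argc, eal_argv) < 0) {
        fprintf(stderr, "rte_eal_init failed\n");
        return 1;
    }
    rte_eal_cleanup();
    return 0;
}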
[... 88 further identical connect() failures (errno = 111) and qpair connection errors for tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420, timestamped 10:55:27.460764 through 10:55:27.479361, omitted ...]
00:28:20.138 [2024-11-19 10:55:27.479570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.138 [2024-11-19 10:55:27.479603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.138 qpair failed and we were unable to recover it. 00:28:20.138 [2024-11-19 10:55:27.479722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.138 [2024-11-19 10:55:27.479756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.138 qpair failed and we were unable to recover it. 00:28:20.138 [2024-11-19 10:55:27.479867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.138 [2024-11-19 10:55:27.479899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.138 qpair failed and we were unable to recover it. 00:28:20.138 [2024-11-19 10:55:27.480111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.138 [2024-11-19 10:55:27.480146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.138 qpair failed and we were unable to recover it. 00:28:20.138 [2024-11-19 10:55:27.480403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.138 [2024-11-19 10:55:27.480434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.138 qpair failed and we were unable to recover it. 00:28:20.138 [2024-11-19 10:55:27.480548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.138 [2024-11-19 10:55:27.480581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.138 qpair failed and we were unable to recover it. 00:28:20.138 [2024-11-19 10:55:27.480759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.138 [2024-11-19 10:55:27.480833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.138 qpair failed and we were unable to recover it. 00:28:20.138 [2024-11-19 10:55:27.481123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.138 [2024-11-19 10:55:27.481162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.138 qpair failed and we were unable to recover it. 00:28:20.138 [2024-11-19 10:55:27.481383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.138 [2024-11-19 10:55:27.481417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.138 qpair failed and we were unable to recover it. 00:28:20.138 [2024-11-19 10:55:27.481526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.138 [2024-11-19 10:55:27.481559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.138 qpair failed and we were unable to recover it. 
00:28:20.138 [2024-11-19 10:55:27.481741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.138 [2024-11-19 10:55:27.481775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.138 qpair failed and we were unable to recover it. 00:28:20.138 [2024-11-19 10:55:27.481898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.138 [2024-11-19 10:55:27.481940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.138 qpair failed and we were unable to recover it. 00:28:20.138 [2024-11-19 10:55:27.482082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.138 [2024-11-19 10:55:27.482116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.138 qpair failed and we were unable to recover it. 00:28:20.139 [2024-11-19 10:55:27.482240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.139 [2024-11-19 10:55:27.482274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.139 qpair failed and we were unable to recover it. 00:28:20.139 [2024-11-19 10:55:27.482389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.139 [2024-11-19 10:55:27.482422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.139 qpair failed and we were unable to recover it. 00:28:20.139 [2024-11-19 10:55:27.482529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.139 [2024-11-19 10:55:27.482563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.139 qpair failed and we were unable to recover it. 00:28:20.139 [2024-11-19 10:55:27.482748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.139 [2024-11-19 10:55:27.482782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.139 qpair failed and we were unable to recover it. 00:28:20.139 [2024-11-19 10:55:27.482919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.139 [2024-11-19 10:55:27.482963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.139 qpair failed and we were unable to recover it. 00:28:20.139 [2024-11-19 10:55:27.483155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.139 [2024-11-19 10:55:27.483189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.139 qpair failed and we were unable to recover it. 00:28:20.139 [2024-11-19 10:55:27.483326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.139 [2024-11-19 10:55:27.483359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.139 qpair failed and we were unable to recover it. 
00:28:20.139 [2024-11-19 10:55:27.483546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.139 [2024-11-19 10:55:27.483580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.139 qpair failed and we were unable to recover it. 00:28:20.139 [2024-11-19 10:55:27.483758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.139 [2024-11-19 10:55:27.483792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.139 qpair failed and we were unable to recover it. 00:28:20.139 [2024-11-19 10:55:27.483977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.139 [2024-11-19 10:55:27.484013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.139 qpair failed and we were unable to recover it. 00:28:20.140 [2024-11-19 10:55:27.484255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.140 [2024-11-19 10:55:27.484289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.140 qpair failed and we were unable to recover it. 00:28:20.140 [2024-11-19 10:55:27.484472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.140 [2024-11-19 10:55:27.484505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.140 qpair failed and we were unable to recover it. 00:28:20.141 [2024-11-19 10:55:27.484745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.141 [2024-11-19 10:55:27.484778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.141 qpair failed and we were unable to recover it. 00:28:20.141 [2024-11-19 10:55:27.484972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.141 [2024-11-19 10:55:27.485006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.141 qpair failed and we were unable to recover it. 00:28:20.141 [2024-11-19 10:55:27.485149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.141 [2024-11-19 10:55:27.485183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.141 qpair failed and we were unable to recover it. 00:28:20.141 [2024-11-19 10:55:27.485377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.141 [2024-11-19 10:55:27.485410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.141 qpair failed and we were unable to recover it. 00:28:20.141 [2024-11-19 10:55:27.485647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.141 [2024-11-19 10:55:27.485681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.141 qpair failed and we were unable to recover it. 
00:28:20.141 [2024-11-19 10:55:27.485873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.141 [2024-11-19 10:55:27.485907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.141 qpair failed and we were unable to recover it. 00:28:20.142 [2024-11-19 10:55:27.486106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.142 [2024-11-19 10:55:27.486141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.142 qpair failed and we were unable to recover it. 00:28:20.142 [2024-11-19 10:55:27.486275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.142 [2024-11-19 10:55:27.486310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.142 qpair failed and we were unable to recover it. 00:28:20.142 [2024-11-19 10:55:27.486516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.142 [2024-11-19 10:55:27.486550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.142 qpair failed and we were unable to recover it. 00:28:20.142 [2024-11-19 10:55:27.486749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.142 [2024-11-19 10:55:27.486782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.142 qpair failed and we were unable to recover it. 00:28:20.142 [2024-11-19 10:55:27.486912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.142 [2024-11-19 10:55:27.486946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.142 qpair failed and we were unable to recover it. 00:28:20.142 [2024-11-19 10:55:27.487146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.142 [2024-11-19 10:55:27.487181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.142 qpair failed and we were unable to recover it. 00:28:20.142 [2024-11-19 10:55:27.487379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.142 [2024-11-19 10:55:27.487412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.142 qpair failed and we were unable to recover it. 00:28:20.142 [2024-11-19 10:55:27.487678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.142 [2024-11-19 10:55:27.487712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.142 qpair failed and we were unable to recover it. 00:28:20.142 [2024-11-19 10:55:27.487914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.142 [2024-11-19 10:55:27.487958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.142 qpair failed and we were unable to recover it. 
00:28:20.142 [2024-11-19 10:55:27.488084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.142 [2024-11-19 10:55:27.488117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.142 qpair failed and we were unable to recover it. 00:28:20.142 [2024-11-19 10:55:27.488314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.142 [2024-11-19 10:55:27.488347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.142 qpair failed and we were unable to recover it. 00:28:20.143 [2024-11-19 10:55:27.488531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.143 [2024-11-19 10:55:27.488565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.143 qpair failed and we were unable to recover it. 00:28:20.143 [2024-11-19 10:55:27.488739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.143 [2024-11-19 10:55:27.488773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.143 qpair failed and we were unable to recover it. 00:28:20.143 [2024-11-19 10:55:27.488908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.143 [2024-11-19 10:55:27.488941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.143 qpair failed and we were unable to recover it. 00:28:20.143 [2024-11-19 10:55:27.489133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.143 [2024-11-19 10:55:27.489168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.143 qpair failed and we were unable to recover it. 00:28:20.143 [2024-11-19 10:55:27.489352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.143 [2024-11-19 10:55:27.489391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.143 qpair failed and we were unable to recover it. 00:28:20.143 [2024-11-19 10:55:27.489564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.143 [2024-11-19 10:55:27.489598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.143 qpair failed and we were unable to recover it. 00:28:20.144 [2024-11-19 10:55:27.489726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.144 [2024-11-19 10:55:27.489759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.144 qpair failed and we were unable to recover it. 00:28:20.144 [2024-11-19 10:55:27.489873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.144 [2024-11-19 10:55:27.489905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.144 qpair failed and we were unable to recover it. 
00:28:20.144 [2024-11-19 10:55:27.490124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.144 [2024-11-19 10:55:27.490158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.144 qpair failed and we were unable to recover it. 00:28:20.144 [2024-11-19 10:55:27.490425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.144 [2024-11-19 10:55:27.490458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.144 qpair failed and we were unable to recover it. 00:28:20.144 [2024-11-19 10:55:27.490598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.144 [2024-11-19 10:55:27.490631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.144 qpair failed and we were unable to recover it. 00:28:20.144 [2024-11-19 10:55:27.490898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.144 [2024-11-19 10:55:27.490931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.144 qpair failed and we were unable to recover it. 00:28:20.144 [2024-11-19 10:55:27.491141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.144 [2024-11-19 10:55:27.491174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.144 qpair failed and we were unable to recover it. 00:28:20.144 [2024-11-19 10:55:27.491471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.144 [2024-11-19 10:55:27.491504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.144 qpair failed and we were unable to recover it. 00:28:20.145 [2024-11-19 10:55:27.491694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.145 [2024-11-19 10:55:27.491727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.145 qpair failed and we were unable to recover it. 00:28:20.145 [2024-11-19 10:55:27.491832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.145 [2024-11-19 10:55:27.491866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.145 qpair failed and we were unable to recover it. 00:28:20.145 [2024-11-19 10:55:27.491984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.145 [2024-11-19 10:55:27.492018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.145 qpair failed and we were unable to recover it. 00:28:20.145 [2024-11-19 10:55:27.492262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.145 [2024-11-19 10:55:27.492296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.145 qpair failed and we were unable to recover it. 
00:28:20.145 [2024-11-19 10:55:27.492519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.145 [2024-11-19 10:55:27.492553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.145 qpair failed and we were unable to recover it. 00:28:20.145 [2024-11-19 10:55:27.492744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.145 [2024-11-19 10:55:27.492778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.145 qpair failed and we were unable to recover it. 00:28:20.145 [2024-11-19 10:55:27.492906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.145 [2024-11-19 10:55:27.492939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.145 qpair failed and we were unable to recover it. 00:28:20.145 [2024-11-19 10:55:27.493063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.145 [2024-11-19 10:55:27.493097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.145 qpair failed and we were unable to recover it. 00:28:20.145 [2024-11-19 10:55:27.493226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.145 [2024-11-19 10:55:27.493259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.145 qpair failed and we were unable to recover it. 00:28:20.145 [2024-11-19 10:55:27.493381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.146 [2024-11-19 10:55:27.493413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.146 qpair failed and we were unable to recover it. 00:28:20.146 [2024-11-19 10:55:27.493594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.146 [2024-11-19 10:55:27.493627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.146 qpair failed and we were unable to recover it. 00:28:20.146 [2024-11-19 10:55:27.493803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.146 [2024-11-19 10:55:27.493837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.146 qpair failed and we were unable to recover it. 00:28:20.146 [2024-11-19 10:55:27.493967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.146 [2024-11-19 10:55:27.494001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.146 qpair failed and we were unable to recover it. 00:28:20.146 [2024-11-19 10:55:27.494258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.146 [2024-11-19 10:55:27.494290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.146 qpair failed and we were unable to recover it. 
00:28:20.146 [2024-11-19 10:55:27.494422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.146 [2024-11-19 10:55:27.494454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.146 qpair failed and we were unable to recover it. 00:28:20.146 [2024-11-19 10:55:27.494634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.146 [2024-11-19 10:55:27.494667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.147 qpair failed and we were unable to recover it. 00:28:20.147 [2024-11-19 10:55:27.494791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.147 [2024-11-19 10:55:27.494824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.147 qpair failed and we were unable to recover it. 00:28:20.147 [2024-11-19 10:55:27.495015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.147 [2024-11-19 10:55:27.495062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.147 qpair failed and we were unable to recover it. 00:28:20.147 [2024-11-19 10:55:27.495186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.147 [2024-11-19 10:55:27.495217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.147 qpair failed and we were unable to recover it. 00:28:20.147 [2024-11-19 10:55:27.495456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.147 [2024-11-19 10:55:27.495489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.147 qpair failed and we were unable to recover it. 00:28:20.147 [2024-11-19 10:55:27.495599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.147 [2024-11-19 10:55:27.495632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.147 qpair failed and we were unable to recover it. 00:28:20.147 [2024-11-19 10:55:27.495807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.147 [2024-11-19 10:55:27.495840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.147 qpair failed and we were unable to recover it. 00:28:20.147 [2024-11-19 10:55:27.495969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.148 [2024-11-19 10:55:27.496003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.148 qpair failed and we were unable to recover it. 00:28:20.148 [2024-11-19 10:55:27.496206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.148 [2024-11-19 10:55:27.496239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.148 qpair failed and we were unable to recover it. 
00:28:20.148 [2024-11-19 10:55:27.496510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.148 [2024-11-19 10:55:27.496542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.148 qpair failed and we were unable to recover it. 00:28:20.148 [2024-11-19 10:55:27.496786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.148 [2024-11-19 10:55:27.496819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.148 qpair failed and we were unable to recover it. 00:28:20.148 [2024-11-19 10:55:27.497029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.148 [2024-11-19 10:55:27.497064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.148 qpair failed and we were unable to recover it. 00:28:20.148 [2024-11-19 10:55:27.497249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.148 [2024-11-19 10:55:27.497283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.148 qpair failed and we were unable to recover it. 00:28:20.148 [2024-11-19 10:55:27.497453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.148 [2024-11-19 10:55:27.497486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.148 qpair failed and we were unable to recover it. 00:28:20.148 [2024-11-19 10:55:27.497687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.148 [2024-11-19 10:55:27.497721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.148 qpair failed and we were unable to recover it. 00:28:20.148 [2024-11-19 10:55:27.497887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.148 [2024-11-19 10:55:27.497920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.148 qpair failed and we were unable to recover it. 00:28:20.149 [2024-11-19 10:55:27.498110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.149 [2024-11-19 10:55:27.498144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.149 qpair failed and we were unable to recover it. 00:28:20.149 [2024-11-19 10:55:27.498405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.149 [2024-11-19 10:55:27.498438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.149 qpair failed and we were unable to recover it. 00:28:20.149 [2024-11-19 10:55:27.498543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.149 [2024-11-19 10:55:27.498576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.149 qpair failed and we were unable to recover it. 
00:28:20.149 [2024-11-19 10:55:27.498695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.149 [2024-11-19 10:55:27.498728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.149 qpair failed and we were unable to recover it. 00:28:20.149 [2024-11-19 10:55:27.498902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.149 [2024-11-19 10:55:27.498934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.149 qpair failed and we were unable to recover it. 00:28:20.149 [2024-11-19 10:55:27.499209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.150 [2024-11-19 10:55:27.499243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.150 qpair failed and we were unable to recover it. 00:28:20.150 [2024-11-19 10:55:27.499437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.150 [2024-11-19 10:55:27.499470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.150 qpair failed and we were unable to recover it. 00:28:20.150 [2024-11-19 10:55:27.499641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.150 [2024-11-19 10:55:27.499674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.150 qpair failed and we were unable to recover it. 00:28:20.150 [2024-11-19 10:55:27.499815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.150 [2024-11-19 10:55:27.499849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.150 qpair failed and we were unable to recover it. 00:28:20.150 [2024-11-19 10:55:27.499956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.150 [2024-11-19 10:55:27.499991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.150 qpair failed and we were unable to recover it. 00:28:20.150 [2024-11-19 10:55:27.500167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.150 [2024-11-19 10:55:27.500200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.150 qpair failed and we were unable to recover it. 00:28:20.150 [2024-11-19 10:55:27.500392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.150 [2024-11-19 10:55:27.500425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.150 qpair failed and we were unable to recover it. 00:28:20.150 [2024-11-19 10:55:27.500626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.150 [2024-11-19 10:55:27.500660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.150 qpair failed and we were unable to recover it. 
00:28:20.150 [2024-11-19 10:55:27.500790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.150 [2024-11-19 10:55:27.500829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.150 qpair failed and we were unable to recover it. 00:28:20.151 [2024-11-19 10:55:27.501019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.151 [2024-11-19 10:55:27.501054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.151 qpair failed and we were unable to recover it. 00:28:20.151 [2024-11-19 10:55:27.501236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.151 [2024-11-19 10:55:27.501269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.151 qpair failed and we were unable to recover it. 00:28:20.151 [2024-11-19 10:55:27.501476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.151 [2024-11-19 10:55:27.501509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.151 qpair failed and we were unable to recover it. 00:28:20.151 [2024-11-19 10:55:27.501771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.151 [2024-11-19 10:55:27.501805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.151 qpair failed and we were unable to recover it. 00:28:20.151 [2024-11-19 10:55:27.501933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.151 [2024-11-19 10:55:27.501976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.151 qpair failed and we were unable to recover it. 00:28:20.151 [2024-11-19 10:55:27.502109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.151 [2024-11-19 10:55:27.502142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.152 qpair failed and we were unable to recover it. 00:28:20.152 [2024-11-19 10:55:27.502335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.152 [2024-11-19 10:55:27.502369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.152 qpair failed and we were unable to recover it. 00:28:20.152 [2024-11-19 10:55:27.502499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.152 [2024-11-19 10:55:27.502533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.152 qpair failed and we were unable to recover it. 00:28:20.152 [2024-11-19 10:55:27.502816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.152 [2024-11-19 10:55:27.502849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.152 qpair failed and we were unable to recover it. 
00:28:20.152 [2024-11-19 10:55:27.503040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.152 [2024-11-19 10:55:27.503074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.152 qpair failed and we were unable to recover it. 00:28:20.152 [2024-11-19 10:55:27.503243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.152 [2024-11-19 10:55:27.503276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.152 qpair failed and we were unable to recover it. 00:28:20.152 [2024-11-19 10:55:27.503461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.152 [2024-11-19 10:55:27.503494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.152 qpair failed and we were unable to recover it. 00:28:20.152 [2024-11-19 10:55:27.503668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.152 [2024-11-19 10:55:27.503701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.152 qpair failed and we were unable to recover it. 00:28:20.152 [2024-11-19 10:55:27.503803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.152 [2024-11-19 10:55:27.503836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.152 qpair failed and we were unable to recover it. 00:28:20.153 [2024-11-19 10:55:27.504120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.153 [2024-11-19 10:55:27.504155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.153 qpair failed and we were unable to recover it. 00:28:20.153 [2024-11-19 10:55:27.504324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.153 [2024-11-19 10:55:27.504358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.153 qpair failed and we were unable to recover it. 00:28:20.153 [2024-11-19 10:55:27.504483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.153 [2024-11-19 10:55:27.504516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.153 qpair failed and we were unable to recover it. 00:28:20.153 [2024-11-19 10:55:27.504709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.153 [2024-11-19 10:55:27.504742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.153 qpair failed and we were unable to recover it. 00:28:20.153 [2024-11-19 10:55:27.504866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.153 [2024-11-19 10:55:27.504899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.153 qpair failed and we were unable to recover it. 
00:28:20.153 [2024-11-19 10:55:27.505041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.153 [2024-11-19 10:55:27.505074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.153 qpair failed and we were unable to recover it. 00:28:20.153 [2024-11-19 10:55:27.505185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.153 [2024-11-19 10:55:27.505219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.153 qpair failed and we were unable to recover it. 00:28:20.153 [2024-11-19 10:55:27.505354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.153 [2024-11-19 10:55:27.505387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.153 qpair failed and we were unable to recover it. 00:28:20.153 [2024-11-19 10:55:27.505530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.153 [2024-11-19 10:55:27.505563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.153 qpair failed and we were unable to recover it. 00:28:20.153 [2024-11-19 10:55:27.505680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.153 [2024-11-19 10:55:27.505714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.153 qpair failed and we were unable to recover it. 00:28:20.153 [2024-11-19 10:55:27.505925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.153 [2024-11-19 10:55:27.505968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.153 qpair failed and we were unable to recover it. 00:28:20.153 [2024-11-19 10:55:27.506257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.153 [2024-11-19 10:55:27.506291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.153 qpair failed and we were unable to recover it. 00:28:20.153 [2024-11-19 10:55:27.506530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.153 [2024-11-19 10:55:27.506562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.153 qpair failed and we were unable to recover it. 00:28:20.153 [2024-11-19 10:55:27.506779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.153 [2024-11-19 10:55:27.506812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.153 qpair failed and we were unable to recover it. 00:28:20.153 [2024-11-19 10:55:27.506944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.153 [2024-11-19 10:55:27.507001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.153 qpair failed and we were unable to recover it. 
00:28:20.153 [2024-11-19 10:55:27.507210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.153 [2024-11-19 10:55:27.507242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.153 qpair failed and we were unable to recover it. 00:28:20.153 [2024-11-19 10:55:27.507359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.153 [2024-11-19 10:55:27.507393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.153 qpair failed and we were unable to recover it. 00:28:20.153 [2024-11-19 10:55:27.507518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.153 [2024-11-19 10:55:27.507551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.153 qpair failed and we were unable to recover it. 00:28:20.153 [2024-11-19 10:55:27.507660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.153 [2024-11-19 10:55:27.507693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.153 qpair failed and we were unable to recover it. 00:28:20.153 [2024-11-19 10:55:27.507868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.153 [2024-11-19 10:55:27.507902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.153 qpair failed and we were unable to recover it. 00:28:20.153 [2024-11-19 10:55:27.508130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.153 [2024-11-19 10:55:27.508163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.153 qpair failed and we were unable to recover it. 00:28:20.153 [2024-11-19 10:55:27.508308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.153 [2024-11-19 10:55:27.508341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.153 qpair failed and we were unable to recover it. 00:28:20.153 [2024-11-19 10:55:27.508509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.153 [2024-11-19 10:55:27.508543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.153 qpair failed and we were unable to recover it. 00:28:20.153 [2024-11-19 10:55:27.508665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.153 [2024-11-19 10:55:27.508699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.153 qpair failed and we were unable to recover it. 00:28:20.153 [2024-11-19 10:55:27.508883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.153 [2024-11-19 10:55:27.508916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.153 qpair failed and we were unable to recover it. 
00:28:20.153 [2024-11-19 10:55:27.509133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.153 [2024-11-19 10:55:27.509167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.153 qpair failed and we were unable to recover it. 00:28:20.153 [2024-11-19 10:55:27.509360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.153 [2024-11-19 10:55:27.509400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.153 qpair failed and we were unable to recover it. 00:28:20.153 [2024-11-19 10:55:27.509534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.153 [2024-11-19 10:55:27.509567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.153 qpair failed and we were unable to recover it. 00:28:20.153 [2024-11-19 10:55:27.509749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.153 [2024-11-19 10:55:27.509782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.153 qpair failed and we were unable to recover it. 00:28:20.153 [2024-11-19 10:55:27.509945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.153 [2024-11-19 10:55:27.509990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.153 qpair failed and we were unable to recover it. 00:28:20.153 [2024-11-19 10:55:27.510228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.153 [2024-11-19 10:55:27.510261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.153 qpair failed and we were unable to recover it. 00:28:20.153 [2024-11-19 10:55:27.510380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.153 [2024-11-19 10:55:27.510413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.153 qpair failed and we were unable to recover it. 00:28:20.153 [2024-11-19 10:55:27.510589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.153 [2024-11-19 10:55:27.510622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.153 qpair failed and we were unable to recover it. 00:28:20.153 [2024-11-19 10:55:27.510820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.153 [2024-11-19 10:55:27.510852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.153 qpair failed and we were unable to recover it. 00:28:20.153 [2024-11-19 10:55:27.510999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.153 [2024-11-19 10:55:27.511035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.153 qpair failed and we were unable to recover it. 
00:28:20.153 [2024-11-19 10:55:27.511217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.153 [2024-11-19 10:55:27.511251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:20.153 qpair failed and we were unable to recover it.
00:28:20.156 [2024-11-19 10:55:27.529368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d6af0 (9): Bad file descriptor
00:28:20.439 [2024-11-19 10:55:27.529654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.439 [2024-11-19 10:55:27.529724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:20.439 qpair failed and we were unable to recover it.
00:28:20.439 [2024-11-19 10:55:27.530170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.439 [2024-11-19 10:55:27.530242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:20.439 qpair failed and we were unable to recover it.
00:28:20.441 [2024-11-19 10:55:27.545955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:28:20.442 [2024-11-19 10:55:27.553376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.442 [2024-11-19 10:55:27.553447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.442 qpair failed and we were unable to recover it.
00:28:20.442 [2024-11-19 10:55:27.556706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.442 [2024-11-19 10:55:27.556739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.442 qpair failed and we were unable to recover it. 00:28:20.442 [2024-11-19 10:55:27.556990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.442 [2024-11-19 10:55:27.557025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.442 qpair failed and we were unable to recover it. 00:28:20.442 [2024-11-19 10:55:27.557205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.442 [2024-11-19 10:55:27.557238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.442 qpair failed and we were unable to recover it. 00:28:20.442 [2024-11-19 10:55:27.557410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.442 [2024-11-19 10:55:27.557443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.442 qpair failed and we were unable to recover it. 00:28:20.442 [2024-11-19 10:55:27.557583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.442 [2024-11-19 10:55:27.557617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.442 qpair failed and we were unable to recover it. 00:28:20.442 [2024-11-19 10:55:27.557812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.442 [2024-11-19 10:55:27.557846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.442 qpair failed and we were unable to recover it. 00:28:20.442 [2024-11-19 10:55:27.558028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.442 [2024-11-19 10:55:27.558063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.442 qpair failed and we were unable to recover it. 00:28:20.442 [2024-11-19 10:55:27.558210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.442 [2024-11-19 10:55:27.558243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.442 qpair failed and we were unable to recover it. 00:28:20.442 [2024-11-19 10:55:27.558374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.442 [2024-11-19 10:55:27.558407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.442 qpair failed and we were unable to recover it. 00:28:20.442 [2024-11-19 10:55:27.558672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.442 [2024-11-19 10:55:27.558706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.442 qpair failed and we were unable to recover it. 
00:28:20.442 [2024-11-19 10:55:27.558835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.442 [2024-11-19 10:55:27.558869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.442 qpair failed and we were unable to recover it. 00:28:20.442 [2024-11-19 10:55:27.559133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.442 [2024-11-19 10:55:27.559167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.442 qpair failed and we were unable to recover it. 00:28:20.442 [2024-11-19 10:55:27.559311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.442 [2024-11-19 10:55:27.559345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.442 qpair failed and we were unable to recover it. 00:28:20.442 [2024-11-19 10:55:27.559609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.442 [2024-11-19 10:55:27.559642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.442 qpair failed and we were unable to recover it. 00:28:20.442 [2024-11-19 10:55:27.559776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.442 [2024-11-19 10:55:27.559809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.442 qpair failed and we were unable to recover it. 00:28:20.442 [2024-11-19 10:55:27.559934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.442 [2024-11-19 10:55:27.559980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.443 qpair failed and we were unable to recover it. 00:28:20.443 [2024-11-19 10:55:27.560222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.443 [2024-11-19 10:55:27.560256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.443 qpair failed and we were unable to recover it. 00:28:20.443 [2024-11-19 10:55:27.560510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.443 [2024-11-19 10:55:27.560544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.443 qpair failed and we were unable to recover it. 00:28:20.443 [2024-11-19 10:55:27.560740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.443 [2024-11-19 10:55:27.560774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.443 qpair failed and we were unable to recover it. 00:28:20.443 [2024-11-19 10:55:27.560966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.443 [2024-11-19 10:55:27.561000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.443 qpair failed and we were unable to recover it. 
00:28:20.443 [2024-11-19 10:55:27.561192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.443 [2024-11-19 10:55:27.561225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.443 qpair failed and we were unable to recover it. 00:28:20.443 [2024-11-19 10:55:27.561426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.443 [2024-11-19 10:55:27.561460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.443 qpair failed and we were unable to recover it. 00:28:20.443 [2024-11-19 10:55:27.561699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.443 [2024-11-19 10:55:27.561734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.443 qpair failed and we were unable to recover it. 00:28:20.443 [2024-11-19 10:55:27.561919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.443 [2024-11-19 10:55:27.561961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.443 qpair failed and we were unable to recover it. 00:28:20.443 [2024-11-19 10:55:27.562173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.443 [2024-11-19 10:55:27.562208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.443 qpair failed and we were unable to recover it. 00:28:20.443 [2024-11-19 10:55:27.562430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.443 [2024-11-19 10:55:27.562471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.443 qpair failed and we were unable to recover it. 00:28:20.443 [2024-11-19 10:55:27.562718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.443 [2024-11-19 10:55:27.562751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.443 qpair failed and we were unable to recover it. 00:28:20.443 [2024-11-19 10:55:27.562968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.443 [2024-11-19 10:55:27.563003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.443 qpair failed and we were unable to recover it. 00:28:20.443 [2024-11-19 10:55:27.563196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.443 [2024-11-19 10:55:27.563230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.443 qpair failed and we were unable to recover it. 00:28:20.443 [2024-11-19 10:55:27.563416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.443 [2024-11-19 10:55:27.563449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.443 qpair failed and we were unable to recover it. 
00:28:20.443 [2024-11-19 10:55:27.563623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.443 [2024-11-19 10:55:27.563657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.443 qpair failed and we were unable to recover it. 00:28:20.443 [2024-11-19 10:55:27.563790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.443 [2024-11-19 10:55:27.563823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.443 qpair failed and we were unable to recover it. 00:28:20.443 [2024-11-19 10:55:27.564005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.443 [2024-11-19 10:55:27.564040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.443 qpair failed and we were unable to recover it. 00:28:20.443 [2024-11-19 10:55:27.564247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.443 [2024-11-19 10:55:27.564281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.443 qpair failed and we were unable to recover it. 00:28:20.443 [2024-11-19 10:55:27.564494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.443 [2024-11-19 10:55:27.564529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.443 qpair failed and we were unable to recover it. 00:28:20.443 [2024-11-19 10:55:27.564663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.443 [2024-11-19 10:55:27.564696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.443 qpair failed and we were unable to recover it. 00:28:20.443 [2024-11-19 10:55:27.564896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.443 [2024-11-19 10:55:27.564930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.443 qpair failed and we were unable to recover it. 00:28:20.443 [2024-11-19 10:55:27.565209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.443 [2024-11-19 10:55:27.565243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.443 qpair failed and we were unable to recover it. 00:28:20.443 [2024-11-19 10:55:27.565461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.443 [2024-11-19 10:55:27.565511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.443 qpair failed and we were unable to recover it. 00:28:20.443 [2024-11-19 10:55:27.565740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.443 [2024-11-19 10:55:27.565773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.443 qpair failed and we were unable to recover it. 
00:28:20.443 [2024-11-19 10:55:27.565963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.443 [2024-11-19 10:55:27.565999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.443 qpair failed and we were unable to recover it. 00:28:20.443 [2024-11-19 10:55:27.566204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.443 [2024-11-19 10:55:27.566236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.443 qpair failed and we were unable to recover it. 00:28:20.443 [2024-11-19 10:55:27.566451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.443 [2024-11-19 10:55:27.566483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.443 qpair failed and we were unable to recover it. 00:28:20.443 [2024-11-19 10:55:27.566723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.443 [2024-11-19 10:55:27.566756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.443 qpair failed and we were unable to recover it. 00:28:20.443 [2024-11-19 10:55:27.566997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.443 [2024-11-19 10:55:27.567033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.443 qpair failed and we were unable to recover it. 00:28:20.443 [2024-11-19 10:55:27.567227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.443 [2024-11-19 10:55:27.567261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.443 qpair failed and we were unable to recover it. 00:28:20.443 [2024-11-19 10:55:27.567475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.443 [2024-11-19 10:55:27.567509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.443 qpair failed and we were unable to recover it. 00:28:20.443 [2024-11-19 10:55:27.567749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.443 [2024-11-19 10:55:27.567781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.443 qpair failed and we were unable to recover it. 00:28:20.444 [2024-11-19 10:55:27.568070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.444 [2024-11-19 10:55:27.568106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.444 qpair failed and we were unable to recover it. 00:28:20.444 [2024-11-19 10:55:27.568284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.444 [2024-11-19 10:55:27.568317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.444 qpair failed and we were unable to recover it. 
00:28:20.444 [2024-11-19 10:55:27.568515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.444 [2024-11-19 10:55:27.568548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.444 qpair failed and we were unable to recover it. 00:28:20.444 [2024-11-19 10:55:27.568666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.444 [2024-11-19 10:55:27.568699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.444 qpair failed and we were unable to recover it. 00:28:20.444 [2024-11-19 10:55:27.568831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.444 [2024-11-19 10:55:27.568864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.444 qpair failed and we were unable to recover it. 00:28:20.444 [2024-11-19 10:55:27.569081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.444 [2024-11-19 10:55:27.569115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.444 qpair failed and we were unable to recover it. 00:28:20.444 [2024-11-19 10:55:27.569358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.444 [2024-11-19 10:55:27.569392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.444 qpair failed and we were unable to recover it. 00:28:20.444 [2024-11-19 10:55:27.569581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.444 [2024-11-19 10:55:27.569614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.444 qpair failed and we were unable to recover it. 00:28:20.444 [2024-11-19 10:55:27.569820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.444 [2024-11-19 10:55:27.569853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.444 qpair failed and we were unable to recover it. 00:28:20.444 [2024-11-19 10:55:27.570114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.444 [2024-11-19 10:55:27.570148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.444 qpair failed and we were unable to recover it. 00:28:20.444 [2024-11-19 10:55:27.570332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.444 [2024-11-19 10:55:27.570366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.444 qpair failed and we were unable to recover it. 00:28:20.444 [2024-11-19 10:55:27.570604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.444 [2024-11-19 10:55:27.570638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.444 qpair failed and we were unable to recover it. 
00:28:20.444 [2024-11-19 10:55:27.570826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.444 [2024-11-19 10:55:27.570860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.444 qpair failed and we were unable to recover it. 00:28:20.444 [2024-11-19 10:55:27.570990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.444 [2024-11-19 10:55:27.571025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.444 qpair failed and we were unable to recover it. 00:28:20.444 [2024-11-19 10:55:27.571207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.444 [2024-11-19 10:55:27.571241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.444 qpair failed and we were unable to recover it. 00:28:20.444 [2024-11-19 10:55:27.571360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.444 [2024-11-19 10:55:27.571393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.444 qpair failed and we were unable to recover it. 00:28:20.444 [2024-11-19 10:55:27.571517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.444 [2024-11-19 10:55:27.571550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.444 qpair failed and we were unable to recover it. 00:28:20.444 [2024-11-19 10:55:27.571752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.444 [2024-11-19 10:55:27.571796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.444 qpair failed and we were unable to recover it. 00:28:20.444 [2024-11-19 10:55:27.571997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.444 [2024-11-19 10:55:27.572035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.444 qpair failed and we were unable to recover it. 00:28:20.444 [2024-11-19 10:55:27.572218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.444 [2024-11-19 10:55:27.572256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.444 qpair failed and we were unable to recover it. 00:28:20.444 [2024-11-19 10:55:27.572454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.444 [2024-11-19 10:55:27.572488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.444 qpair failed and we were unable to recover it. 00:28:20.444 [2024-11-19 10:55:27.572590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.444 [2024-11-19 10:55:27.572623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.444 qpair failed and we were unable to recover it. 
00:28:20.444 [2024-11-19 10:55:27.572863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.444 [2024-11-19 10:55:27.572897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.444 qpair failed and we were unable to recover it. 00:28:20.444 [2024-11-19 10:55:27.573044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.444 [2024-11-19 10:55:27.573078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.444 qpair failed and we were unable to recover it. 00:28:20.444 [2024-11-19 10:55:27.573252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.444 [2024-11-19 10:55:27.573286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.444 qpair failed and we were unable to recover it. 00:28:20.444 [2024-11-19 10:55:27.573523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.444 [2024-11-19 10:55:27.573557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.444 qpair failed and we were unable to recover it. 00:28:20.444 [2024-11-19 10:55:27.573746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.444 [2024-11-19 10:55:27.573780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.444 qpair failed and we were unable to recover it. 00:28:20.444 [2024-11-19 10:55:27.574041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.444 [2024-11-19 10:55:27.574077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.444 qpair failed and we were unable to recover it. 00:28:20.444 [2024-11-19 10:55:27.574323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.444 [2024-11-19 10:55:27.574357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.444 qpair failed and we were unable to recover it. 00:28:20.444 [2024-11-19 10:55:27.574596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.444 [2024-11-19 10:55:27.574630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.444 qpair failed and we were unable to recover it. 00:28:20.444 [2024-11-19 10:55:27.574869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.444 [2024-11-19 10:55:27.574903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.444 qpair failed and we were unable to recover it. 00:28:20.444 [2024-11-19 10:55:27.575036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.444 [2024-11-19 10:55:27.575072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.444 qpair failed and we were unable to recover it. 
00:28:20.444 [2024-11-19 10:55:27.575277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.444 [2024-11-19 10:55:27.575310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.444 qpair failed and we were unable to recover it. 00:28:20.444 [2024-11-19 10:55:27.575515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.444 [2024-11-19 10:55:27.575548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.444 qpair failed and we were unable to recover it. 00:28:20.444 [2024-11-19 10:55:27.575783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.444 [2024-11-19 10:55:27.575818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.444 qpair failed and we were unable to recover it. 00:28:20.444 [2024-11-19 10:55:27.575997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.444 [2024-11-19 10:55:27.576033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.444 qpair failed and we were unable to recover it. 00:28:20.445 [2024-11-19 10:55:27.576215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.445 [2024-11-19 10:55:27.576249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.445 qpair failed and we were unable to recover it. 00:28:20.445 [2024-11-19 10:55:27.576501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.445 [2024-11-19 10:55:27.576535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.445 qpair failed and we were unable to recover it. 00:28:20.445 [2024-11-19 10:55:27.576787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.445 [2024-11-19 10:55:27.576821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.445 qpair failed and we were unable to recover it. 00:28:20.445 [2024-11-19 10:55:27.577037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.445 [2024-11-19 10:55:27.577073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.445 qpair failed and we were unable to recover it. 00:28:20.445 [2024-11-19 10:55:27.577191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.445 [2024-11-19 10:55:27.577225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.445 qpair failed and we were unable to recover it. 00:28:20.445 [2024-11-19 10:55:27.577403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.445 [2024-11-19 10:55:27.577436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.445 qpair failed and we were unable to recover it. 
00:28:20.445 [2024-11-19 10:55:27.577644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.445 [2024-11-19 10:55:27.577677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.445 qpair failed and we were unable to recover it. 00:28:20.445 [2024-11-19 10:55:27.577861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.445 [2024-11-19 10:55:27.577894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.445 qpair failed and we were unable to recover it. 00:28:20.445 [2024-11-19 10:55:27.578095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.445 [2024-11-19 10:55:27.578137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.445 qpair failed and we were unable to recover it. 00:28:20.445 [2024-11-19 10:55:27.578312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.445 [2024-11-19 10:55:27.578346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.445 qpair failed and we were unable to recover it. 00:28:20.445 [2024-11-19 10:55:27.578523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.445 [2024-11-19 10:55:27.578556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.445 qpair failed and we were unable to recover it. 00:28:20.445 [2024-11-19 10:55:27.578819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.445 [2024-11-19 10:55:27.578853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.445 qpair failed and we were unable to recover it. 00:28:20.445 [2024-11-19 10:55:27.578976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.445 [2024-11-19 10:55:27.579011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.445 qpair failed and we were unable to recover it. 00:28:20.445 [2024-11-19 10:55:27.579200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.445 [2024-11-19 10:55:27.579234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.445 qpair failed and we were unable to recover it. 00:28:20.445 [2024-11-19 10:55:27.579404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.445 [2024-11-19 10:55:27.579437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.445 qpair failed and we were unable to recover it. 00:28:20.445 [2024-11-19 10:55:27.579635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.445 [2024-11-19 10:55:27.579669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.445 qpair failed and we were unable to recover it. 
00:28:20.445 [2024-11-19 10:55:27.579865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.445 [2024-11-19 10:55:27.579897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.445 qpair failed and we were unable to recover it. 00:28:20.445 [2024-11-19 10:55:27.580098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.445 [2024-11-19 10:55:27.580133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.445 qpair failed and we were unable to recover it. 00:28:20.445 [2024-11-19 10:55:27.580370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.445 [2024-11-19 10:55:27.580403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.445 qpair failed and we were unable to recover it. 00:28:20.445 [2024-11-19 10:55:27.580593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.445 [2024-11-19 10:55:27.580627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.445 qpair failed and we were unable to recover it. 00:28:20.445 [2024-11-19 10:55:27.580807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.445 [2024-11-19 10:55:27.580841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.445 qpair failed and we were unable to recover it. 00:28:20.445 [2024-11-19 10:55:27.581093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.445 [2024-11-19 10:55:27.581129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.445 qpair failed and we were unable to recover it. 00:28:20.445 [2024-11-19 10:55:27.581334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.445 [2024-11-19 10:55:27.581368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.445 qpair failed and we were unable to recover it. 00:28:20.445 [2024-11-19 10:55:27.581549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.445 [2024-11-19 10:55:27.581583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.445 qpair failed and we were unable to recover it. 00:28:20.445 [2024-11-19 10:55:27.581822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.445 [2024-11-19 10:55:27.581856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.445 qpair failed and we were unable to recover it. 00:28:20.445 [2024-11-19 10:55:27.581978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.445 [2024-11-19 10:55:27.582013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.445 qpair failed and we were unable to recover it. 
00:28:20.445 [2024-11-19 10:55:27.582199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.445 [2024-11-19 10:55:27.582233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.445 qpair failed and we were unable to recover it. 00:28:20.445 [2024-11-19 10:55:27.582415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.445 [2024-11-19 10:55:27.582448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.445 qpair failed and we were unable to recover it. 00:28:20.445 [2024-11-19 10:55:27.582565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.445 [2024-11-19 10:55:27.582598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.445 qpair failed and we were unable to recover it. 00:28:20.445 [2024-11-19 10:55:27.582790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.445 [2024-11-19 10:55:27.582824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.445 qpair failed and we were unable to recover it. 00:28:20.445 [2024-11-19 10:55:27.582942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.445 [2024-11-19 10:55:27.582991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.445 qpair failed and we were unable to recover it. 00:28:20.445 [2024-11-19 10:55:27.583175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.445 [2024-11-19 10:55:27.583210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.445 qpair failed and we were unable to recover it. 00:28:20.445 [2024-11-19 10:55:27.583313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.445 [2024-11-19 10:55:27.583347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.445 qpair failed and we were unable to recover it. 00:28:20.445 [2024-11-19 10:55:27.583473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.445 [2024-11-19 10:55:27.583507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.445 qpair failed and we were unable to recover it. 00:28:20.445 [2024-11-19 10:55:27.583685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.445 [2024-11-19 10:55:27.583718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.445 qpair failed and we were unable to recover it. 00:28:20.445 [2024-11-19 10:55:27.583845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.445 [2024-11-19 10:55:27.583884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.445 qpair failed and we were unable to recover it. 
00:28:20.445 [2024-11-19 10:55:27.584060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.445 [2024-11-19 10:55:27.584097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.445 qpair failed and we were unable to recover it. 00:28:20.445 [2024-11-19 10:55:27.584211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.445 [2024-11-19 10:55:27.584245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.446 qpair failed and we were unable to recover it. 00:28:20.446 [2024-11-19 10:55:27.584526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.446 [2024-11-19 10:55:27.584562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.446 qpair failed and we were unable to recover it. 00:28:20.446 [2024-11-19 10:55:27.584756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.446 [2024-11-19 10:55:27.584791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.446 qpair failed and we were unable to recover it. 00:28:20.446 [2024-11-19 10:55:27.584896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.446 [2024-11-19 10:55:27.584930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.446 qpair failed and we were unable to recover it. 00:28:20.446 [2024-11-19 10:55:27.585056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.446 [2024-11-19 10:55:27.585090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.446 qpair failed and we were unable to recover it. 00:28:20.446 [2024-11-19 10:55:27.585267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.446 [2024-11-19 10:55:27.585300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.446 qpair failed and we were unable to recover it. 00:28:20.446 [2024-11-19 10:55:27.585513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.446 [2024-11-19 10:55:27.585549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.446 qpair failed and we were unable to recover it. 00:28:20.446 [2024-11-19 10:55:27.585674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.446 [2024-11-19 10:55:27.585708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.446 qpair failed and we were unable to recover it. 00:28:20.446 [2024-11-19 10:55:27.585910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.446 [2024-11-19 10:55:27.585956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.446 qpair failed and we were unable to recover it. 
00:28:20.446 [2024-11-19 10:55:27.586147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.446 [2024-11-19 10:55:27.586183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.446 qpair failed and we were unable to recover it. 00:28:20.446 [2024-11-19 10:55:27.586363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.446 [2024-11-19 10:55:27.586398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.446 qpair failed and we were unable to recover it. 00:28:20.446 [2024-11-19 10:55:27.586664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.446 [2024-11-19 10:55:27.586700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.446 qpair failed and we were unable to recover it. 00:28:20.446 [2024-11-19 10:55:27.586831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.446 [2024-11-19 10:55:27.586866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.446 qpair failed and we were unable to recover it. 00:28:20.446 [2024-11-19 10:55:27.586983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.446 [2024-11-19 10:55:27.587019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.446 qpair failed and we were unable to recover it. 00:28:20.446 [2024-11-19 10:55:27.587188] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:20.446 [2024-11-19 10:55:27.587194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.446 [2024-11-19 10:55:27.587221] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:20.446 [2024-11-19 10:55:27.587230] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:20.446 [2024-11-19 10:55:27.587229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.446 [2024-11-19 10:55:27.587237] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:20.446 [2024-11-19 10:55:27.587244] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:20.446 qpair failed and we were unable to recover it. 00:28:20.446 [2024-11-19 10:55:27.587493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.446 [2024-11-19 10:55:27.587525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.446 qpair failed and we were unable to recover it. 00:28:20.446 [2024-11-19 10:55:27.587702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.446 [2024-11-19 10:55:27.587736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.446 qpair failed and we were unable to recover it. 
00:28:20.446 [2024-11-19 10:55:27.587983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.446 [2024-11-19 10:55:27.588016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.446 qpair failed and we were unable to recover it. 00:28:20.446 [2024-11-19 10:55:27.588279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.446 [2024-11-19 10:55:27.588314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.446 qpair failed and we were unable to recover it. 00:28:20.446 [2024-11-19 10:55:27.588574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.446 [2024-11-19 10:55:27.588608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.446 qpair failed and we were unable to recover it. 00:28:20.446 [2024-11-19 10:55:27.588859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.446 [2024-11-19 10:55:27.588893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.446 qpair failed and we were unable to recover it. 00:28:20.446 [2024-11-19 10:55:27.588897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:28:20.446 [2024-11-19 10:55:27.589004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:28:20.446 [2024-11-19 10:55:27.589105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.446 [2024-11-19 10:55:27.589111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:20.446 [2024-11-19 10:55:27.589112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:28:20.446 [2024-11-19 10:55:27.589143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.446 qpair failed and we were unable to recover it. 00:28:20.446 [2024-11-19 10:55:27.589344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.446 [2024-11-19 10:55:27.589383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.446 qpair failed and we were unable to recover it. 00:28:20.446 [2024-11-19 10:55:27.589644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.446 [2024-11-19 10:55:27.589678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.446 qpair failed and we were unable to recover it. 00:28:20.446 [2024-11-19 10:55:27.589888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.446 [2024-11-19 10:55:27.589922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.446 qpair failed and we were unable to recover it. 00:28:20.446 [2024-11-19 10:55:27.590071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.446 [2024-11-19 10:55:27.590104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.446 qpair failed and we were unable to recover it. 
00:28:20.446 [2024-11-19 10:55:27.590246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.446 [2024-11-19 10:55:27.590280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.446 qpair failed and we were unable to recover it. 00:28:20.446 [2024-11-19 10:55:27.590401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.446 [2024-11-19 10:55:27.590434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.446 qpair failed and we were unable to recover it. 00:28:20.446 [2024-11-19 10:55:27.590700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.446 [2024-11-19 10:55:27.590734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.446 qpair failed and we were unable to recover it. 00:28:20.446 [2024-11-19 10:55:27.590941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.446 [2024-11-19 10:55:27.590987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.446 qpair failed and we were unable to recover it. 00:28:20.446 [2024-11-19 10:55:27.591274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.446 [2024-11-19 10:55:27.591309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.446 qpair failed and we were unable to recover it. 00:28:20.446 [2024-11-19 10:55:27.591514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.446 [2024-11-19 10:55:27.591548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.446 qpair failed and we were unable to recover it. 00:28:20.446 [2024-11-19 10:55:27.591673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.446 [2024-11-19 10:55:27.591707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.446 qpair failed and we were unable to recover it. 00:28:20.446 [2024-11-19 10:55:27.591829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.446 [2024-11-19 10:55:27.591861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.446 qpair failed and we were unable to recover it. 00:28:20.446 [2024-11-19 10:55:27.591968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.446 [2024-11-19 10:55:27.592004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.446 qpair failed and we were unable to recover it. 00:28:20.446 [2024-11-19 10:55:27.592287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.446 [2024-11-19 10:55:27.592322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.447 qpair failed and we were unable to recover it. 
00:28:20.447 [2024-11-19 10:55:27.592512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.447 [2024-11-19 10:55:27.592546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.447 qpair failed and we were unable to recover it. 00:28:20.447 [2024-11-19 10:55:27.592728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.447 [2024-11-19 10:55:27.592762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.447 qpair failed and we were unable to recover it. 00:28:20.447 [2024-11-19 10:55:27.592963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.447 [2024-11-19 10:55:27.592998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.447 qpair failed and we were unable to recover it. 00:28:20.447 [2024-11-19 10:55:27.593238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.447 [2024-11-19 10:55:27.593272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.447 qpair failed and we were unable to recover it. 00:28:20.447 [2024-11-19 10:55:27.593685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.447 [2024-11-19 10:55:27.593722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.447 qpair failed and we were unable to recover it. 00:28:20.447 [2024-11-19 10:55:27.593842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.447 [2024-11-19 10:55:27.593995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.447 qpair failed and we were unable to recover it. 00:28:20.447 [2024-11-19 10:55:27.594183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.447 [2024-11-19 10:55:27.594217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.447 qpair failed and we were unable to recover it. 00:28:20.447 [2024-11-19 10:55:27.594396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.447 [2024-11-19 10:55:27.594431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.447 qpair failed and we were unable to recover it. 00:28:20.447 [2024-11-19 10:55:27.594614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.447 [2024-11-19 10:55:27.594647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.447 qpair failed and we were unable to recover it. 00:28:20.447 [2024-11-19 10:55:27.594829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.447 [2024-11-19 10:55:27.594863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.447 qpair failed and we were unable to recover it. 
00:28:20.447 [2024-11-19 10:55:27.595067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.447 [2024-11-19 10:55:27.595102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.447 qpair failed and we were unable to recover it. 00:28:20.447 [2024-11-19 10:55:27.595366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.447 [2024-11-19 10:55:27.595402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.447 qpair failed and we were unable to recover it. 00:28:20.447 [2024-11-19 10:55:27.595601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.447 [2024-11-19 10:55:27.595635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.447 qpair failed and we were unable to recover it. 00:28:20.447 [2024-11-19 10:55:27.595764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.447 [2024-11-19 10:55:27.595807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.447 qpair failed and we were unable to recover it. 00:28:20.447 [2024-11-19 10:55:27.596093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.447 [2024-11-19 10:55:27.596130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.447 qpair failed and we were unable to recover it. 00:28:20.447 [2024-11-19 10:55:27.596395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.447 [2024-11-19 10:55:27.596430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.447 qpair failed and we were unable to recover it. 00:28:20.447 [2024-11-19 10:55:27.596700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.447 [2024-11-19 10:55:27.596734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.447 qpair failed and we were unable to recover it. 00:28:20.447 [2024-11-19 10:55:27.596867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.447 [2024-11-19 10:55:27.596902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.447 qpair failed and we were unable to recover it. 00:28:20.447 [2024-11-19 10:55:27.597038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.447 [2024-11-19 10:55:27.597073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.447 qpair failed and we were unable to recover it. 00:28:20.447 [2024-11-19 10:55:27.597254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.447 [2024-11-19 10:55:27.597288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.447 qpair failed and we were unable to recover it. 
00:28:20.447 [2024-11-19 10:55:27.597463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.447 [2024-11-19 10:55:27.597497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.447 qpair failed and we were unable to recover it. 00:28:20.447 [2024-11-19 10:55:27.597732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.447 [2024-11-19 10:55:27.597766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.447 qpair failed and we were unable to recover it. 00:28:20.447 [2024-11-19 10:55:27.597968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.447 [2024-11-19 10:55:27.598005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.447 qpair failed and we were unable to recover it. 00:28:20.447 [2024-11-19 10:55:27.598245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.447 [2024-11-19 10:55:27.598278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.447 qpair failed and we were unable to recover it. 00:28:20.447 [2024-11-19 10:55:27.598476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.447 [2024-11-19 10:55:27.598509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.447 qpair failed and we were unable to recover it. 00:28:20.447 [2024-11-19 10:55:27.598701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.447 [2024-11-19 10:55:27.598735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.447 qpair failed and we were unable to recover it. 00:28:20.447 [2024-11-19 10:55:27.598922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.447 [2024-11-19 10:55:27.598965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.447 qpair failed and we were unable to recover it. 00:28:20.447 [2024-11-19 10:55:27.599162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.447 [2024-11-19 10:55:27.599196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.447 qpair failed and we were unable to recover it. 00:28:20.447 [2024-11-19 10:55:27.599315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.447 [2024-11-19 10:55:27.599349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.447 qpair failed and we were unable to recover it. 00:28:20.447 [2024-11-19 10:55:27.599611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.447 [2024-11-19 10:55:27.599645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.447 qpair failed and we were unable to recover it. 
00:28:20.447 [2024-11-19 10:55:27.599854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.447 [2024-11-19 10:55:27.599889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.447 qpair failed and we were unable to recover it. 00:28:20.447 [2024-11-19 10:55:27.600013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.447 [2024-11-19 10:55:27.600048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.447 qpair failed and we were unable to recover it. 00:28:20.447 [2024-11-19 10:55:27.600256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.447 [2024-11-19 10:55:27.600289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.447 qpair failed and we were unable to recover it. 00:28:20.447 [2024-11-19 10:55:27.600571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.447 [2024-11-19 10:55:27.600605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.447 qpair failed and we were unable to recover it. 00:28:20.447 [2024-11-19 10:55:27.600805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.447 [2024-11-19 10:55:27.600839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.447 qpair failed and we were unable to recover it. 00:28:20.447 [2024-11-19 10:55:27.601100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.447 [2024-11-19 10:55:27.601136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.447 qpair failed and we were unable to recover it. 00:28:20.447 [2024-11-19 10:55:27.601268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.447 [2024-11-19 10:55:27.601303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.447 qpair failed and we were unable to recover it. 00:28:20.447 [2024-11-19 10:55:27.601493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.447 [2024-11-19 10:55:27.601527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.448 qpair failed and we were unable to recover it. 00:28:20.448 [2024-11-19 10:55:27.601793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.448 [2024-11-19 10:55:27.601826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.448 qpair failed and we were unable to recover it. 00:28:20.448 [2024-11-19 10:55:27.602006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.448 [2024-11-19 10:55:27.602041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.448 qpair failed and we were unable to recover it. 
00:28:20.448 [2024-11-19 10:55:27.602236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.448 [2024-11-19 10:55:27.602270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.448 qpair failed and we were unable to recover it. 00:28:20.448 [2024-11-19 10:55:27.602508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.448 [2024-11-19 10:55:27.602543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.448 qpair failed and we were unable to recover it. 00:28:20.448 [2024-11-19 10:55:27.602681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.448 [2024-11-19 10:55:27.602715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.448 qpair failed and we were unable to recover it. 00:28:20.448 [2024-11-19 10:55:27.602984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.448 [2024-11-19 10:55:27.603021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.448 qpair failed and we were unable to recover it. 00:28:20.448 [2024-11-19 10:55:27.603285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.448 [2024-11-19 10:55:27.603321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.448 qpair failed and we were unable to recover it. 00:28:20.448 [2024-11-19 10:55:27.603564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.448 [2024-11-19 10:55:27.603599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.448 qpair failed and we were unable to recover it. 00:28:20.448 [2024-11-19 10:55:27.603731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.448 [2024-11-19 10:55:27.603766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.448 qpair failed and we were unable to recover it. 00:28:20.448 [2024-11-19 10:55:27.603865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.448 [2024-11-19 10:55:27.603898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.448 qpair failed and we were unable to recover it. 00:28:20.448 [2024-11-19 10:55:27.604090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.448 [2024-11-19 10:55:27.604126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.448 qpair failed and we were unable to recover it. 00:28:20.448 [2024-11-19 10:55:27.604368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.448 [2024-11-19 10:55:27.604402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.448 qpair failed and we were unable to recover it. 
00:28:20.448 [2024-11-19 10:55:27.604642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.448 [2024-11-19 10:55:27.604676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.448 qpair failed and we were unable to recover it. 00:28:20.448 [2024-11-19 10:55:27.604866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.448 [2024-11-19 10:55:27.604901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.448 qpair failed and we were unable to recover it. 00:28:20.448 [2024-11-19 10:55:27.605177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.448 [2024-11-19 10:55:27.605214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.448 qpair failed and we were unable to recover it. 00:28:20.448 [2024-11-19 10:55:27.605349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.448 [2024-11-19 10:55:27.605384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.448 qpair failed and we were unable to recover it. 00:28:20.448 [2024-11-19 10:55:27.605713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.448 [2024-11-19 10:55:27.605770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.448 qpair failed and we were unable to recover it. 00:28:20.448 [2024-11-19 10:55:27.605966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.448 [2024-11-19 10:55:27.606002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.448 qpair failed and we were unable to recover it. 00:28:20.448 [2024-11-19 10:55:27.606270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.448 [2024-11-19 10:55:27.606305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.448 qpair failed and we were unable to recover it. 00:28:20.448 [2024-11-19 10:55:27.606488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.448 [2024-11-19 10:55:27.606521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.448 qpair failed and we were unable to recover it. 00:28:20.448 [2024-11-19 10:55:27.606641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.448 [2024-11-19 10:55:27.606675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.448 qpair failed and we were unable to recover it. 00:28:20.448 [2024-11-19 10:55:27.606869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.448 [2024-11-19 10:55:27.606903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.448 qpair failed and we were unable to recover it. 
00:28:20.448 [2024-11-19 10:55:27.607182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.448 [2024-11-19 10:55:27.607217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.448 qpair failed and we were unable to recover it. 00:28:20.448 [2024-11-19 10:55:27.607338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.448 [2024-11-19 10:55:27.607371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.448 qpair failed and we were unable to recover it. 00:28:20.448 [2024-11-19 10:55:27.607555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.448 [2024-11-19 10:55:27.607588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.448 qpair failed and we were unable to recover it. 00:28:20.448 [2024-11-19 10:55:27.607787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.448 [2024-11-19 10:55:27.607822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.448 qpair failed and we were unable to recover it. 00:28:20.448 [2024-11-19 10:55:27.608023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.448 [2024-11-19 10:55:27.608058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.448 qpair failed and we were unable to recover it. 00:28:20.448 [2024-11-19 10:55:27.608299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.448 [2024-11-19 10:55:27.608334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.448 qpair failed and we were unable to recover it. 00:28:20.448 [2024-11-19 10:55:27.608555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.448 [2024-11-19 10:55:27.608589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.448 qpair failed and we were unable to recover it. 00:28:20.448 [2024-11-19 10:55:27.608872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.448 [2024-11-19 10:55:27.608915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.448 qpair failed and we were unable to recover it. 00:28:20.448 [2024-11-19 10:55:27.609275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.448 [2024-11-19 10:55:27.609327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.448 qpair failed and we were unable to recover it. 00:28:20.448 [2024-11-19 10:55:27.609577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.448 [2024-11-19 10:55:27.609612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.448 qpair failed and we were unable to recover it. 
00:28:20.449 [2024-11-19 10:55:27.609750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.449 [2024-11-19 10:55:27.609784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.449 qpair failed and we were unable to recover it. 00:28:20.449 [2024-11-19 10:55:27.610028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.449 [2024-11-19 10:55:27.610063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.449 qpair failed and we were unable to recover it. 00:28:20.449 [2024-11-19 10:55:27.610304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.449 [2024-11-19 10:55:27.610336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.449 qpair failed and we were unable to recover it. 00:28:20.449 [2024-11-19 10:55:27.610548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.449 [2024-11-19 10:55:27.610583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.449 qpair failed and we were unable to recover it. 00:28:20.449 [2024-11-19 10:55:27.610768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.449 [2024-11-19 10:55:27.610801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.449 qpair failed and we were unable to recover it. 00:28:20.449 [2024-11-19 10:55:27.610980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.449 [2024-11-19 10:55:27.611014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.449 qpair failed and we were unable to recover it. 00:28:20.449 [2024-11-19 10:55:27.611302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.449 [2024-11-19 10:55:27.611335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.449 qpair failed and we were unable to recover it. 00:28:20.449 [2024-11-19 10:55:27.611568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.449 [2024-11-19 10:55:27.611601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.449 qpair failed and we were unable to recover it. 00:28:20.449 [2024-11-19 10:55:27.611849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.449 [2024-11-19 10:55:27.611881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.449 qpair failed and we were unable to recover it. 00:28:20.449 [2024-11-19 10:55:27.612090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.449 [2024-11-19 10:55:27.612124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.449 qpair failed and we were unable to recover it. 
00:28:20.449 [2024-11-19 10:55:27.612309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.449 [2024-11-19 10:55:27.612342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.449 qpair failed and we were unable to recover it. 00:28:20.449 [2024-11-19 10:55:27.612545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.449 [2024-11-19 10:55:27.612578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.449 qpair failed and we were unable to recover it. 00:28:20.449 [2024-11-19 10:55:27.612760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.449 [2024-11-19 10:55:27.612793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.449 qpair failed and we were unable to recover it. 00:28:20.449 [2024-11-19 10:55:27.613022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.449 [2024-11-19 10:55:27.613056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.449 qpair failed and we were unable to recover it. 00:28:20.449 [2024-11-19 10:55:27.613281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.449 [2024-11-19 10:55:27.613313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.449 qpair failed and we were unable to recover it. 00:28:20.449 [2024-11-19 10:55:27.613480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.449 [2024-11-19 10:55:27.613512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.449 qpair failed and we were unable to recover it. 00:28:20.449 [2024-11-19 10:55:27.613772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.449 [2024-11-19 10:55:27.613805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.449 qpair failed and we were unable to recover it. 00:28:20.449 [2024-11-19 10:55:27.613999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.449 [2024-11-19 10:55:27.614032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.449 qpair failed and we were unable to recover it. 00:28:20.449 [2024-11-19 10:55:27.614293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.449 [2024-11-19 10:55:27.614326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.449 qpair failed and we were unable to recover it. 00:28:20.449 [2024-11-19 10:55:27.614603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.449 [2024-11-19 10:55:27.614636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.449 qpair failed and we were unable to recover it. 
00:28:20.449 [2024-11-19 10:55:27.614839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.449 [2024-11-19 10:55:27.614871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.449 qpair failed and we were unable to recover it. 00:28:20.449 [2024-11-19 10:55:27.615063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.449 [2024-11-19 10:55:27.615097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.449 qpair failed and we were unable to recover it. 00:28:20.449 [2024-11-19 10:55:27.615306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.449 [2024-11-19 10:55:27.615338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.449 qpair failed and we were unable to recover it. 00:28:20.449 [2024-11-19 10:55:27.615592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.449 [2024-11-19 10:55:27.615624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 00:28:20.449 qpair failed and we were unable to recover it. 00:28:20.449 [2024-11-19 10:55:27.615960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.449 [2024-11-19 10:55:27.616020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.449 qpair failed and we were unable to recover it. 00:28:20.449 [2024-11-19 10:55:27.616299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.449 [2024-11-19 10:55:27.616333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.449 qpair failed and we were unable to recover it. 00:28:20.449 [2024-11-19 10:55:27.616541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.449 [2024-11-19 10:55:27.616572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.449 qpair failed and we were unable to recover it. 00:28:20.449 [2024-11-19 10:55:27.616804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.449 [2024-11-19 10:55:27.616837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.449 qpair failed and we were unable to recover it. 00:28:20.449 [2024-11-19 10:55:27.617081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.449 [2024-11-19 10:55:27.617115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.449 qpair failed and we were unable to recover it. 00:28:20.449 [2024-11-19 10:55:27.617376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.449 [2024-11-19 10:55:27.617409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.449 qpair failed and we were unable to recover it. 
00:28:20.449 [2024-11-19 10:55:27.617695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.449 [2024-11-19 10:55:27.617727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.449 qpair failed and we were unable to recover it. 00:28:20.449 [2024-11-19 10:55:27.617979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.449 [2024-11-19 10:55:27.618013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.449 qpair failed and we were unable to recover it. 00:28:20.449 [2024-11-19 10:55:27.618211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.449 [2024-11-19 10:55:27.618243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.449 qpair failed and we were unable to recover it. 00:28:20.449 [2024-11-19 10:55:27.618505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.450 [2024-11-19 10:55:27.618537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.450 qpair failed and we were unable to recover it. 00:28:20.450 [2024-11-19 10:55:27.618828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.450 [2024-11-19 10:55:27.618860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.450 qpair failed and we were unable to recover it. 00:28:20.450 [2024-11-19 10:55:27.618986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.450 [2024-11-19 10:55:27.619019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.450 qpair failed and we were unable to recover it. 00:28:20.450 [2024-11-19 10:55:27.619259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.450 [2024-11-19 10:55:27.619290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.450 qpair failed and we were unable to recover it. 00:28:20.450 [2024-11-19 10:55:27.619521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.450 [2024-11-19 10:55:27.619563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.450 qpair failed and we were unable to recover it. 00:28:20.450 [2024-11-19 10:55:27.619818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.450 [2024-11-19 10:55:27.619849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.450 qpair failed and we were unable to recover it. 00:28:20.450 [2024-11-19 10:55:27.620113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.450 [2024-11-19 10:55:27.620147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.450 qpair failed and we were unable to recover it. 
00:28:20.450 [2024-11-19 10:55:27.620431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.450 [2024-11-19 10:55:27.620463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.450 qpair failed and we were unable to recover it. 00:28:20.450 [2024-11-19 10:55:27.620655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.450 [2024-11-19 10:55:27.620688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.450 qpair failed and we were unable to recover it. 00:28:20.450 [2024-11-19 10:55:27.620877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.450 [2024-11-19 10:55:27.620909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.450 qpair failed and we were unable to recover it. 00:28:20.450 [2024-11-19 10:55:27.621184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.450 [2024-11-19 10:55:27.621219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.450 qpair failed and we were unable to recover it. 00:28:20.450 [2024-11-19 10:55:27.621414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.450 [2024-11-19 10:55:27.621448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.450 qpair failed and we were unable to recover it. 00:28:20.450 [2024-11-19 10:55:27.621702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.450 [2024-11-19 10:55:27.621735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.450 qpair failed and we were unable to recover it. 00:28:20.450 [2024-11-19 10:55:27.621912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.450 [2024-11-19 10:55:27.621945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.450 qpair failed and we were unable to recover it. 00:28:20.450 [2024-11-19 10:55:27.622198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.450 [2024-11-19 10:55:27.622231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.450 qpair failed and we were unable to recover it. 00:28:20.450 [2024-11-19 10:55:27.622438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.450 [2024-11-19 10:55:27.622470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.450 qpair failed and we were unable to recover it. 00:28:20.450 [2024-11-19 10:55:27.622663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.450 [2024-11-19 10:55:27.622696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.450 qpair failed and we were unable to recover it. 
00:28:20.450 [2024-11-19 10:55:27.622968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.450 [2024-11-19 10:55:27.623003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.450 qpair failed and we were unable to recover it. 00:28:20.450 [2024-11-19 10:55:27.623281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.450 [2024-11-19 10:55:27.623313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.450 qpair failed and we were unable to recover it. 00:28:20.450 [2024-11-19 10:55:27.623586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.450 [2024-11-19 10:55:27.623617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.450 qpair failed and we were unable to recover it. 00:28:20.450 [2024-11-19 10:55:27.623839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.450 [2024-11-19 10:55:27.623872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.450 qpair failed and we were unable to recover it. 00:28:20.450 [2024-11-19 10:55:27.624046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.450 [2024-11-19 10:55:27.624082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.450 qpair failed and we were unable to recover it. 00:28:20.450 [2024-11-19 10:55:27.624300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.450 [2024-11-19 10:55:27.624333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.450 qpair failed and we were unable to recover it. 00:28:20.450 [2024-11-19 10:55:27.624508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.450 [2024-11-19 10:55:27.624540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.450 qpair failed and we were unable to recover it. 00:28:20.450 [2024-11-19 10:55:27.624723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.450 [2024-11-19 10:55:27.624755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.450 qpair failed and we were unable to recover it. 00:28:20.450 [2024-11-19 10:55:27.625003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.450 [2024-11-19 10:55:27.625038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.450 qpair failed and we were unable to recover it. 00:28:20.450 [2024-11-19 10:55:27.625287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.450 [2024-11-19 10:55:27.625319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.450 qpair failed and we were unable to recover it. 
00:28:20.450 [2024-11-19 10:55:27.625608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.450 [2024-11-19 10:55:27.625641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.450 qpair failed and we were unable to recover it. 00:28:20.450 [2024-11-19 10:55:27.625910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.450 [2024-11-19 10:55:27.625943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.450 qpair failed and we were unable to recover it. 00:28:20.450 [2024-11-19 10:55:27.626264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.450 [2024-11-19 10:55:27.626296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.450 qpair failed and we were unable to recover it. 00:28:20.450 [2024-11-19 10:55:27.626539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.450 [2024-11-19 10:55:27.626571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420 00:28:20.450 qpair failed and we were unable to recover it. 00:28:20.450 [2024-11-19 10:55:27.626795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.450 [2024-11-19 10:55:27.626834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.450 qpair failed and we were unable to recover it. 00:28:20.450 [2024-11-19 10:55:27.627031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.450 [2024-11-19 10:55:27.627064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.450 qpair failed and we were unable to recover it. 00:28:20.450 [2024-11-19 10:55:27.627356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.450 [2024-11-19 10:55:27.627389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.450 qpair failed and we were unable to recover it. 00:28:20.450 [2024-11-19 10:55:27.627528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.450 [2024-11-19 10:55:27.627563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.450 qpair failed and we were unable to recover it. 00:28:20.450 [2024-11-19 10:55:27.627831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.450 [2024-11-19 10:55:27.627864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.450 qpair failed and we were unable to recover it. 00:28:20.450 [2024-11-19 10:55:27.628112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.451 [2024-11-19 10:55:27.628147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 00:28:20.451 qpair failed and we were unable to recover it. 
00:28:20.451 [2024-11-19 10:55:27.628398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:28:20.451 [2024-11-19 10:55:27.628432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420 
00:28:20.451 qpair failed and we were unable to recover it. 
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats without interruption from 10:55:27.628635 through 10:55:27.682659, cycling across tqpair=0x7f6dd8000b90, 0x7f6de0000b90, and 0x22c8ba0 ...] 
00:28:20.456 [2024-11-19 10:55:27.682858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:28:20.456 [2024-11-19 10:55:27.682906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 
00:28:20.456 qpair failed and we were unable to recover it. 
00:28:20.457 [2024-11-19 10:55:27.693762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.457 [2024-11-19 10:55:27.693794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:20.457 10:55:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:20.457 qpair failed and we were unable to recover it.
00:28:20.457 [2024-11-19 10:55:27.694035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.457 [2024-11-19 10:55:27.694069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:20.457 qpair failed and we were unable to recover it.
00:28:20.457 10:55:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:28:20.457 [2024-11-19 10:55:27.694268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.457 [2024-11-19 10:55:27.694302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:20.457 qpair failed and we were unable to recover it.
00:28:20.457 [2024-11-19 10:55:27.694473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.457 [2024-11-19 10:55:27.694504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:20.457 qpair failed and we were unable to recover it.
00:28:20.457 10:55:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:28:20.457 [2024-11-19 10:55:27.694766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.457 [2024-11-19 10:55:27.694799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:20.457 qpair failed and we were unable to recover it.
00:28:20.457 10:55:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:20.457 [2024-11-19 10:55:27.695041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.458 [2024-11-19 10:55:27.695074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:20.458 qpair failed and we were unable to recover it.
00:28:20.458 10:55:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:20.458 [2024-11-19 10:55:27.695315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.458 [2024-11-19 10:55:27.695347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:20.458 qpair failed and we were unable to recover it.
00:28:20.458 [2024-11-19 10:55:27.695602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.458 [2024-11-19 10:55:27.695634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:20.458 qpair failed and we were unable to recover it.
00:28:20.458 [2024-11-19 10:55:27.703424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.458 [2024-11-19 10:55:27.703455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:20.458 qpair failed and we were unable to recover it.
00:28:20.458 [2024-11-19 10:55:27.703776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.458 [2024-11-19 10:55:27.703813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.458 qpair failed and we were unable to recover it.
00:28:20.459 [2024-11-19 10:55:27.708884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.459 [2024-11-19 10:55:27.708915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6de0000b90 with addr=10.0.0.2, port=4420
00:28:20.459 qpair failed and we were unable to recover it.
00:28:20.459 [2024-11-19 10:55:27.709146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.459 [2024-11-19 10:55:27.709194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420
00:28:20.459 qpair failed and we were unable to recover it.
00:28:20.460 [2024-11-19 10:55:27.720782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.460 [2024-11-19 10:55:27.720817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.460 qpair failed and we were unable to recover it. 00:28:20.460 [2024-11-19 10:55:27.721096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.460 [2024-11-19 10:55:27.721132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.460 qpair failed and we were unable to recover it. 00:28:20.460 [2024-11-19 10:55:27.721351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.460 [2024-11-19 10:55:27.721385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.460 qpair failed and we were unable to recover it. 00:28:20.461 [2024-11-19 10:55:27.721596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.461 [2024-11-19 10:55:27.721629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.461 qpair failed and we were unable to recover it. 00:28:20.461 [2024-11-19 10:55:27.721855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.461 [2024-11-19 10:55:27.721889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.461 qpair failed and we were unable to recover it. 00:28:20.461 [2024-11-19 10:55:27.722174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.461 [2024-11-19 10:55:27.722210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.461 qpair failed and we were unable to recover it. 00:28:20.461 [2024-11-19 10:55:27.722410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.461 [2024-11-19 10:55:27.722444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.461 qpair failed and we were unable to recover it. 00:28:20.461 [2024-11-19 10:55:27.722677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.461 [2024-11-19 10:55:27.722711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.461 qpair failed and we were unable to recover it. 00:28:20.461 [2024-11-19 10:55:27.722975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.461 [2024-11-19 10:55:27.723010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.461 qpair failed and we were unable to recover it. 00:28:20.461 [2024-11-19 10:55:27.723204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.461 [2024-11-19 10:55:27.723239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c8ba0 with addr=10.0.0.2, port=4420 00:28:20.461 qpair failed and we were unable to recover it. 
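For reference: errno = 111 is ECONNREFUSED on Linux, meaning each connect() was actively refused because nothing was accepting on 10.0.0.2:4420 at that moment. That is consistent with a target-disconnect test, where the host initiator keeps retrying while the target side is down or not yet listening. A minimal way to observe the same errno outside SPDK (an illustrative sketch only; the host and port are taken from this log, and strace and nc must be installed):

    # Attempt a TCP connect to a port with no listener and trace connect(2);
    # the kernel returns -1 with ECONNREFUSED (errno 111) almost immediately.
    strace -e trace=connect nc -w 1 10.0.0.2 4420
    # expected output (trimmed): connect(3, {sa_family=AF_INET, sin_port=htons(4420),
    #   sin_addr=inet_addr("10.0.0.2")}, 16) = -1 ECONNREFUSED (Connection refused)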
00:28:20.461 [2024-11-19 10:55:27.723497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:28:20.461 [2024-11-19 10:55:27.723535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420 
00:28:20.461 qpair failed and we were unable to recover it. 
00:28:20.461 [... the qpair handle changes from 0x22c8ba0 to 0x7f6dd4000b90 here; the same triplet then repeats 29 more times, 10:55:27.723793 through 10:55:27.730696 ...]
00:28:20.461 [... the same triplet for tqpair=0x7f6dd4000b90 repeats 8 more times, 10:55:27.730886 through 10:55:27.732553, interleaved with the test-harness xtrace output below ...]
00:28:20.462 10:55:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 
00:28:20.462 10:55:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:28:20.462 10:55:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:20.462 10:55:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 
00:28:20.462 [... the same triplet for tqpair=0x7f6dd4000b90 repeats 100 more times, 10:55:27.732793 through 10:55:27.757241 ...]
00:28:20.465 [... the same triplet repeats 10 more times, 10:55:27.757505 through 10:55:27.759868 ...]
00:28:20.465 Malloc0 
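The "Malloc0" line above is the stdout of the rpc_cmd bdev_malloc_create call traced earlier (its output was interleaved with the connection spam): the RPC creates a RAM-backed malloc bdev of 64 MiB with 512-byte blocks on the target and prints the new bdev's name. Run outside the harness, the equivalent would look roughly like this (a sketch that assumes an SPDK checkout and a target app already listening on the default RPC socket, /var/tmp/spdk.sock):

    # Create a 64 MiB malloc bdev with a 512-byte block size, named Malloc0.
    # On success rpc.py prints the bdev name, which is what appears in the log.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0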
00:28:20.465 10:55:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:28:20.465 10:55:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 
00:28:20.465 10:55:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:20.465 10:55:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 
00:28:20.465 [... the same triplet repeats 8 more times, 10:55:27.760051 through 10:55:27.761852, interleaved with the xtrace output above ...]
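The nvmf_create_transport -t tcp step registers the NVMe-oF TCP transport inside the target; only after this, plus the subsystem and listener creation that follow later in the test, can the initiator's connect() attempts start succeeding, which is why the refused-connection triplets continue past this point. A rough standalone equivalent (same assumptions as the previous sketch; the harness also passes -o, a transport option omitted here):

    # Register the TCP transport with a running SPDK NVMe-oF target.
    ./scripts/rpc.py nvmf_create_transport -t tcp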
00:28:20.465 [2024-11-19 10:55:27.762090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.465 [2024-11-19 10:55:27.762131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:20.465 qpair failed and we were unable to recover it.
[the same three-line failure sequence recurs for 9 further attempts on tqpair=0x7f6dd4000b90 between 10:55:27.762401 and 10:55:27.764603]
00:28:20.465 [2024-11-19 10:55:27.764837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.465 [2024-11-19 10:55:27.764869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:20.465 qpair failed and we were unable to recover it.
[the same three-line failure sequence recurs for 9 further attempts on tqpair=0x7f6dd4000b90 between 10:55:27.765120 and 10:55:27.767139]
00:28:20.466 [2024-11-19 10:55:27.767222] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:20.466 [2024-11-19 10:55:27.767459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.466 [2024-11-19 10:55:27.767492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:20.466 qpair failed and we were unable to recover it.
[the same three-line failure sequence recurs for 8 further attempts on tqpair=0x7f6dd4000b90 between 10:55:27.767761 and 10:55:27.769686]
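On Linux, errno = 111 is ECONNREFUSED: the host's connect() reaches 10.0.0.2, but nothing is accepting on port 4420 yet, since the listener is only added further down in this log. A purely illustrative shell probe that reproduces the same check:

  # Probe the target port; bash's /dev/tcp yields the same ECONNREFUSED.
  if (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
      echo "port 4420 is accepting connections"
  else
      echo "connection refused (errno 111 / ECONNREFUSED): no listener yet"
  fi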
00:28:20.466 [2024-11-19 10:55:27.769855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.466 [2024-11-19 10:55:27.769887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:20.466 qpair failed and we were unable to recover it.
[the same three-line failure sequence recurs for 9 further attempts on tqpair=0x7f6dd4000b90 between 10:55:27.770152 and 10:55:27.772301]
00:28:20.466 [2024-11-19 10:55:27.772512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.466 [2024-11-19 10:55:27.772543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:20.466 qpair failed and we were unable to recover it.
[the same three-line failure sequence recurs for 9 further attempts on tqpair=0x7f6dd4000b90 between 10:55:27.772798 and 10:55:27.774921]
00:28:20.467 [2024-11-19 10:55:27.775206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.467 [2024-11-19 10:55:27.775239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:20.467 qpair failed and we were unable to recover it.
00:28:20.467 [2024-11-19 10:55:27.775506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.467 [2024-11-19 10:55:27.775538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:20.467 qpair failed and we were unable to recover it.
00:28:20.467 10:55:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:20.467 [2024-11-19 10:55:27.775823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.467 [2024-11-19 10:55:27.775855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:20.467 qpair failed and we were unable to recover it.
00:28:20.467 [2024-11-19 10:55:27.776129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.467 [2024-11-19 10:55:27.776165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:20.467 10:55:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:28:20.467 qpair failed and we were unable to recover it.
00:28:20.467 [2024-11-19 10:55:27.776433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.467 [2024-11-19 10:55:27.776467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:20.467 qpair failed and we were unable to recover it.
00:28:20.467 10:55:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:20.467 [2024-11-19 10:55:27.776641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.467 [2024-11-19 10:55:27.776674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:20.467 qpair failed and we were unable to recover it.
00:28:20.467 10:55:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:20.467 [2024-11-19 10:55:27.776936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.467 [2024-11-19 10:55:27.776977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd4000b90 with addr=10.0.0.2, port=4420
00:28:20.467 qpair failed and we were unable to recover it.
00:28:20.467 [2024-11-19 10:55:27.777196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.467 [2024-11-19 10:55:27.777253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:20.467 qpair failed and we were unable to recover it.
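The trace above creates the subsystem that the host side keeps dialing. A sketch of the equivalent direct RPC call, with the same NQN and serial number as in the trace (RPC socket path assumed):

  # -a allows any host to connect; -s sets the subsystem serial number.
  sudo ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001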
00:28:20.467 [2024-11-19 10:55:27.777562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.467 [2024-11-19 10:55:27.777598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:20.467 qpair failed and we were unable to recover it.
[the same three-line failure sequence recurs for 9 further attempts on tqpair=0x7f6dd8000b90 between 10:55:27.777842 and 10:55:27.780084]
00:28:20.467 [2024-11-19 10:55:27.780314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.467 [2024-11-19 10:55:27.780347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:20.467 qpair failed and we were unable to recover it.
[the same three-line failure sequence recurs for 9 further attempts on tqpair=0x7f6dd8000b90 between 10:55:27.780521 and 10:55:27.782797]
00:28:20.468 [2024-11-19 10:55:27.783019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.468 [2024-11-19 10:55:27.783054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:20.468 qpair failed and we were unable to recover it.
00:28:20.468 10:55:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:20.468 10:55:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:28:20.468 10:55:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:20.468 10:55:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:20.468 [2024-11-19 10:55:27.785065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.468 [2024-11-19 10:55:27.785115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:20.468 qpair failed and we were unable to recover it.
[the same three-line failure sequence recurs for 6 further attempts on tqpair=0x7f6dd8000b90 between 10:55:27.785350 and 10:55:27.786749]
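Malloc0, which already showed up as stray RPC output earlier in this log, is attached here as the subsystem's namespace. A sketch of both steps; the 64 MiB size and 512-byte block size are illustrative assumptions, since the log only shows the bdev's name:

  # Create a RAM-backed bdev and expose it as a namespace of cnode1 (sketch).
  sudo ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  sudo ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0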
00:28:20.468 [2024-11-19 10:55:27.787065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.468 [2024-11-19 10:55:27.787099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:20.468 qpair failed and we were unable to recover it.
[the same three-line failure sequence recurs for 9 further attempts on tqpair=0x7f6dd8000b90 between 10:55:27.787372 and 10:55:27.789336]
00:28:20.468 [2024-11-19 10:55:27.789657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.468 [2024-11-19 10:55:27.789689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:20.468 qpair failed and we were unable to recover it.
[the same three-line failure sequence recurs for 6 further attempts on tqpair=0x7f6dd8000b90 between 10:55:27.789899 and 10:55:27.791093]
00:28:20.468 10:55:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:20.468 10:55:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:20.468 10:55:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:20.468 10:55:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:20.468 [2024-11-19 10:55:27.793101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.468 [2024-11-19 10:55:27.793151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:20.468 qpair failed and we were unable to recover it.
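Only after this step can the target accept TCP connections on 10.0.0.2:4420, which is why every connect() so far has ended in errno = 111. A sketch of the equivalent direct call:

  sudo ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420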
00:28:20.468 [2024-11-19 10:55:27.793464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:20.468 [2024-11-19 10:55:27.793501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6dd8000b90 with addr=10.0.0.2, port=4420
00:28:20.468 qpair failed and we were unable to recover it.
[the same three-line failure sequence recurs for 7 further attempts on tqpair=0x7f6dd8000b90 between 10:55:27.793769 and 10:55:27.795159]
00:28:20.469 [2024-11-19 10:55:27.795755] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:20.469 [2024-11-19 10:55:27.797932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.469 [2024-11-19 10:55:27.798095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.469 [2024-11-19 10:55:27.798140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.469 [2024-11-19 10:55:27.798164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.469 [2024-11-19 10:55:27.798187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.469 [2024-11-19 10:55:27.798239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.469 qpair failed and we were unable to recover it.
00:28:20.469 10:55:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:20.469 10:55:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:28:20.469 10:55:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:20.469 10:55:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:20.469 [2024-11-19 10:55:27.807864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.469 [2024-11-19 10:55:27.807972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.469 10:55:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:20.469 [2024-11-19 10:55:27.808004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.469 [2024-11-19 10:55:27.808021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.469 [2024-11-19 10:55:27.808042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.469 [2024-11-19 10:55:27.808080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.469 qpair failed and we were unable to recover it.
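From this point the failure mode changes: the TCP socket now connects, but the Fabrics CONNECT for the I/O queue (qpair id 3) is rejected with sct 1, sc 130 after the target reports "Unknown controller ID 0x1", consistent with the controller teardown this disconnect test provokes. When triaging a console log like this, a few greps quantify the two signatures (build.log is an assumed file name):

  grep -c 'connect() failed, errno = 111' build.log      # refused TCP connects
  grep -c 'Unknown controller ID' build.log              # rejected CONNECT commands
  grep 'Connect command completed with error' build.log | sort | uniq -c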
00:28:20.469 10:55:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1850950
00:28:20.469 [2024-11-19 10:55:27.817814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.469 [2024-11-19 10:55:27.817881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.469 [2024-11-19 10:55:27.817902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.469 [2024-11-19 10:55:27.817914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.469 [2024-11-19 10:55:27.817923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.469 [2024-11-19 10:55:27.817951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.469 qpair failed and we were unable to recover it.
[the same seven-line CONNECT-failure sequence recurs for the attempts at 10:55:27.827812 and 10:55:27.837800]
00:28:20.469 [2024-11-19 10:55:27.847812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.469 [2024-11-19 10:55:27.847868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.469 [2024-11-19 10:55:27.847882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.469 [2024-11-19 10:55:27.847889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.469 [2024-11-19 10:55:27.847895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:20.469 [2024-11-19 10:55:27.847911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.469 qpair failed and we were unable to recover it. 00:28:20.469 [2024-11-19 10:55:27.857835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.469 [2024-11-19 10:55:27.857895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.469 [2024-11-19 10:55:27.857909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.469 [2024-11-19 10:55:27.857917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.469 [2024-11-19 10:55:27.857925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:20.469 [2024-11-19 10:55:27.857941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.469 qpair failed and we were unable to recover it. 00:28:20.731 [2024-11-19 10:55:27.867868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.731 [2024-11-19 10:55:27.867925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.731 [2024-11-19 10:55:27.867940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.731 [2024-11-19 10:55:27.867952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.731 [2024-11-19 10:55:27.867959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:20.731 [2024-11-19 10:55:27.867976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.731 qpair failed and we were unable to recover it. 
00:28:20.731 [2024-11-19 10:55:27.877924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.731 [2024-11-19 10:55:27.878014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.731 [2024-11-19 10:55:27.878029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.731 [2024-11-19 10:55:27.878037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.731 [2024-11-19 10:55:27.878044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:20.731 [2024-11-19 10:55:27.878063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.731 qpair failed and we were unable to recover it. 00:28:20.731 [2024-11-19 10:55:27.887935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.731 [2024-11-19 10:55:27.887994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.731 [2024-11-19 10:55:27.888008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.731 [2024-11-19 10:55:27.888015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.731 [2024-11-19 10:55:27.888022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:20.731 [2024-11-19 10:55:27.888038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.731 qpair failed and we were unable to recover it. 00:28:20.731 [2024-11-19 10:55:27.897993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.731 [2024-11-19 10:55:27.898049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.731 [2024-11-19 10:55:27.898063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.731 [2024-11-19 10:55:27.898071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.731 [2024-11-19 10:55:27.898079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:20.731 [2024-11-19 10:55:27.898094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.731 qpair failed and we were unable to recover it. 
00:28:20.731 [2024-11-19 10:55:27.907995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.731 [2024-11-19 10:55:27.908060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.731 [2024-11-19 10:55:27.908073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.731 [2024-11-19 10:55:27.908081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.731 [2024-11-19 10:55:27.908088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:20.731 [2024-11-19 10:55:27.908103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.731 qpair failed and we were unable to recover it. 00:28:20.731 [2024-11-19 10:55:27.918009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.731 [2024-11-19 10:55:27.918066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.731 [2024-11-19 10:55:27.918080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.731 [2024-11-19 10:55:27.918087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.731 [2024-11-19 10:55:27.918095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:20.731 [2024-11-19 10:55:27.918110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.731 qpair failed and we were unable to recover it. 00:28:20.731 [2024-11-19 10:55:27.928044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.731 [2024-11-19 10:55:27.928102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.731 [2024-11-19 10:55:27.928115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.731 [2024-11-19 10:55:27.928123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.731 [2024-11-19 10:55:27.928130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:20.731 [2024-11-19 10:55:27.928145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.731 qpair failed and we were unable to recover it. 
00:28:20.731 [2024-11-19 10:55:27.938067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.731 [2024-11-19 10:55:27.938129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.731 [2024-11-19 10:55:27.938143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.731 [2024-11-19 10:55:27.938150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.731 [2024-11-19 10:55:27.938157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:20.731 [2024-11-19 10:55:27.938172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.731 qpair failed and we were unable to recover it. 00:28:20.731 [2024-11-19 10:55:27.948103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.731 [2024-11-19 10:55:27.948158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.731 [2024-11-19 10:55:27.948171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.731 [2024-11-19 10:55:27.948178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.731 [2024-11-19 10:55:27.948185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:20.731 [2024-11-19 10:55:27.948200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.731 qpair failed and we were unable to recover it. 00:28:20.731 [2024-11-19 10:55:27.958123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.731 [2024-11-19 10:55:27.958180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.731 [2024-11-19 10:55:27.958194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.731 [2024-11-19 10:55:27.958201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.731 [2024-11-19 10:55:27.958208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:20.731 [2024-11-19 10:55:27.958223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.731 qpair failed and we were unable to recover it. 
00:28:20.731 [2024-11-19 10:55:27.968146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.731 [2024-11-19 10:55:27.968197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.731 [2024-11-19 10:55:27.968214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.731 [2024-11-19 10:55:27.968221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.731 [2024-11-19 10:55:27.968228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.731 [2024-11-19 10:55:27.968243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.731 qpair failed and we were unable to recover it.
00:28:20.731 [2024-11-19 10:55:27.978183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.731 [2024-11-19 10:55:27.978238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.731 [2024-11-19 10:55:27.978251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.732 [2024-11-19 10:55:27.978258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.732 [2024-11-19 10:55:27.978265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.732 [2024-11-19 10:55:27.978279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.732 qpair failed and we were unable to recover it.
00:28:20.732 [2024-11-19 10:55:27.988219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.732 [2024-11-19 10:55:27.988279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.732 [2024-11-19 10:55:27.988292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.732 [2024-11-19 10:55:27.988300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.732 [2024-11-19 10:55:27.988307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.732 [2024-11-19 10:55:27.988322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.732 qpair failed and we were unable to recover it.
00:28:20.732 [2024-11-19 10:55:27.998249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.732 [2024-11-19 10:55:27.998303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.732 [2024-11-19 10:55:27.998317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.732 [2024-11-19 10:55:27.998323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.732 [2024-11-19 10:55:27.998330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.732 [2024-11-19 10:55:27.998345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.732 qpair failed and we were unable to recover it.
00:28:20.732 [2024-11-19 10:55:28.008262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.732 [2024-11-19 10:55:28.008316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.732 [2024-11-19 10:55:28.008329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.732 [2024-11-19 10:55:28.008336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.732 [2024-11-19 10:55:28.008342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.732 [2024-11-19 10:55:28.008360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.732 qpair failed and we were unable to recover it.
00:28:20.732 [2024-11-19 10:55:28.018287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.732 [2024-11-19 10:55:28.018338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.732 [2024-11-19 10:55:28.018351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.732 [2024-11-19 10:55:28.018358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.732 [2024-11-19 10:55:28.018365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.732 [2024-11-19 10:55:28.018381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.732 qpair failed and we were unable to recover it.
00:28:20.732 [2024-11-19 10:55:28.028332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.732 [2024-11-19 10:55:28.028391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.732 [2024-11-19 10:55:28.028405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.732 [2024-11-19 10:55:28.028412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.732 [2024-11-19 10:55:28.028419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.732 [2024-11-19 10:55:28.028434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.732 qpair failed and we were unable to recover it.
00:28:20.732 [2024-11-19 10:55:28.038358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.732 [2024-11-19 10:55:28.038414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.732 [2024-11-19 10:55:28.038428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.732 [2024-11-19 10:55:28.038435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.732 [2024-11-19 10:55:28.038441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.732 [2024-11-19 10:55:28.038456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.732 qpair failed and we were unable to recover it.
00:28:20.732 [2024-11-19 10:55:28.048421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.732 [2024-11-19 10:55:28.048504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.732 [2024-11-19 10:55:28.048518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.732 [2024-11-19 10:55:28.048525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.732 [2024-11-19 10:55:28.048532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.732 [2024-11-19 10:55:28.048546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.732 qpair failed and we were unable to recover it.
00:28:20.732 [2024-11-19 10:55:28.058423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.732 [2024-11-19 10:55:28.058476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.732 [2024-11-19 10:55:28.058489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.732 [2024-11-19 10:55:28.058496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.732 [2024-11-19 10:55:28.058503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.732 [2024-11-19 10:55:28.058518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.732 qpair failed and we were unable to recover it.
00:28:20.732 [2024-11-19 10:55:28.068457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.732 [2024-11-19 10:55:28.068514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.732 [2024-11-19 10:55:28.068530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.732 [2024-11-19 10:55:28.068538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.732 [2024-11-19 10:55:28.068545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.732 [2024-11-19 10:55:28.068561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.732 qpair failed and we were unable to recover it.
00:28:20.732 [2024-11-19 10:55:28.078473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.732 [2024-11-19 10:55:28.078565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.732 [2024-11-19 10:55:28.078580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.732 [2024-11-19 10:55:28.078587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.732 [2024-11-19 10:55:28.078594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.732 [2024-11-19 10:55:28.078609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.732 qpair failed and we were unable to recover it.
00:28:20.732 [2024-11-19 10:55:28.088520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.732 [2024-11-19 10:55:28.088575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.732 [2024-11-19 10:55:28.088589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.732 [2024-11-19 10:55:28.088596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.732 [2024-11-19 10:55:28.088602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.732 [2024-11-19 10:55:28.088617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.732 qpair failed and we were unable to recover it.
00:28:20.732 [2024-11-19 10:55:28.098514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.732 [2024-11-19 10:55:28.098569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.732 [2024-11-19 10:55:28.098585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.732 [2024-11-19 10:55:28.098593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.732 [2024-11-19 10:55:28.098599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.732 [2024-11-19 10:55:28.098614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.732 qpair failed and we were unable to recover it.
00:28:20.732 [2024-11-19 10:55:28.108471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.733 [2024-11-19 10:55:28.108551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.733 [2024-11-19 10:55:28.108565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.733 [2024-11-19 10:55:28.108572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.733 [2024-11-19 10:55:28.108579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.733 [2024-11-19 10:55:28.108594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.733 qpair failed and we were unable to recover it.
00:28:20.733 [2024-11-19 10:55:28.118580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.733 [2024-11-19 10:55:28.118633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.733 [2024-11-19 10:55:28.118647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.733 [2024-11-19 10:55:28.118654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.733 [2024-11-19 10:55:28.118660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.733 [2024-11-19 10:55:28.118675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.733 qpair failed and we were unable to recover it.
00:28:20.733 [2024-11-19 10:55:28.128608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.733 [2024-11-19 10:55:28.128663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.733 [2024-11-19 10:55:28.128676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.733 [2024-11-19 10:55:28.128683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.733 [2024-11-19 10:55:28.128689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.733 [2024-11-19 10:55:28.128704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.733 qpair failed and we were unable to recover it.
00:28:20.733 [2024-11-19 10:55:28.138630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.733 [2024-11-19 10:55:28.138681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.733 [2024-11-19 10:55:28.138696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.733 [2024-11-19 10:55:28.138704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.733 [2024-11-19 10:55:28.138714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.733 [2024-11-19 10:55:28.138729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.733 qpair failed and we were unable to recover it.
00:28:20.733 [2024-11-19 10:55:28.148677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.733 [2024-11-19 10:55:28.148739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.733 [2024-11-19 10:55:28.148752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.733 [2024-11-19 10:55:28.148760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.733 [2024-11-19 10:55:28.148766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.733 [2024-11-19 10:55:28.148782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.733 qpair failed and we were unable to recover it.
00:28:20.733 [2024-11-19 10:55:28.158690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.733 [2024-11-19 10:55:28.158745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.733 [2024-11-19 10:55:28.158759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.733 [2024-11-19 10:55:28.158766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.733 [2024-11-19 10:55:28.158773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.733 [2024-11-19 10:55:28.158788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.733 qpair failed and we were unable to recover it.
00:28:20.733 [2024-11-19 10:55:28.168717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.733 [2024-11-19 10:55:28.168795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.733 [2024-11-19 10:55:28.168808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.733 [2024-11-19 10:55:28.168816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.733 [2024-11-19 10:55:28.168822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.733 [2024-11-19 10:55:28.168837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.733 qpair failed and we were unable to recover it.
00:28:20.994 [2024-11-19 10:55:28.178756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.994 [2024-11-19 10:55:28.178817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.994 [2024-11-19 10:55:28.178830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.994 [2024-11-19 10:55:28.178838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.994 [2024-11-19 10:55:28.178844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.994 [2024-11-19 10:55:28.178859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.994 qpair failed and we were unable to recover it.
00:28:20.994 [2024-11-19 10:55:28.188792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.994 [2024-11-19 10:55:28.188868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.994 [2024-11-19 10:55:28.188882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.994 [2024-11-19 10:55:28.188889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.994 [2024-11-19 10:55:28.188895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.994 [2024-11-19 10:55:28.188911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.994 qpair failed and we were unable to recover it.
00:28:20.994 [2024-11-19 10:55:28.198812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.994 [2024-11-19 10:55:28.198871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.994 [2024-11-19 10:55:28.198884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.994 [2024-11-19 10:55:28.198892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.994 [2024-11-19 10:55:28.198899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.994 [2024-11-19 10:55:28.198914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.994 qpair failed and we were unable to recover it.
00:28:20.994 [2024-11-19 10:55:28.208858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.994 [2024-11-19 10:55:28.208914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.994 [2024-11-19 10:55:28.208927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.994 [2024-11-19 10:55:28.208935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.994 [2024-11-19 10:55:28.208942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.994 [2024-11-19 10:55:28.208961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.994 qpair failed and we were unable to recover it.
00:28:20.994 [2024-11-19 10:55:28.218866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.994 [2024-11-19 10:55:28.218920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.994 [2024-11-19 10:55:28.218933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.994 [2024-11-19 10:55:28.218941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.994 [2024-11-19 10:55:28.218951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.994 [2024-11-19 10:55:28.218967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.994 qpair failed and we were unable to recover it.
00:28:20.994 [2024-11-19 10:55:28.228944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.994 [2024-11-19 10:55:28.229020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.994 [2024-11-19 10:55:28.229036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.994 [2024-11-19 10:55:28.229043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.994 [2024-11-19 10:55:28.229049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.994 [2024-11-19 10:55:28.229065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.994 qpair failed and we were unable to recover it.
00:28:20.994 [2024-11-19 10:55:28.238937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.994 [2024-11-19 10:55:28.239006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.994 [2024-11-19 10:55:28.239020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.994 [2024-11-19 10:55:28.239027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.994 [2024-11-19 10:55:28.239032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.994 [2024-11-19 10:55:28.239048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.994 qpair failed and we were unable to recover it.
00:28:20.994 [2024-11-19 10:55:28.249005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.994 [2024-11-19 10:55:28.249114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.994 [2024-11-19 10:55:28.249128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.994 [2024-11-19 10:55:28.249137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.994 [2024-11-19 10:55:28.249145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.994 [2024-11-19 10:55:28.249161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.994 qpair failed and we were unable to recover it.
00:28:20.994 [2024-11-19 10:55:28.258922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.994 [2024-11-19 10:55:28.258980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.995 [2024-11-19 10:55:28.258995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.995 [2024-11-19 10:55:28.259002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.995 [2024-11-19 10:55:28.259010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.995 [2024-11-19 10:55:28.259026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.995 qpair failed and we were unable to recover it.
00:28:20.995 [2024-11-19 10:55:28.268944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.995 [2024-11-19 10:55:28.269010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.995 [2024-11-19 10:55:28.269024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.995 [2024-11-19 10:55:28.269034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.995 [2024-11-19 10:55:28.269040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.995 [2024-11-19 10:55:28.269056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.995 qpair failed and we were unable to recover it.
00:28:20.995 [2024-11-19 10:55:28.278968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.995 [2024-11-19 10:55:28.279025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.995 [2024-11-19 10:55:28.279038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.995 [2024-11-19 10:55:28.279046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.995 [2024-11-19 10:55:28.279054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.995 [2024-11-19 10:55:28.279069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.995 qpair failed and we were unable to recover it.
00:28:20.995 [2024-11-19 10:55:28.289080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.995 [2024-11-19 10:55:28.289150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.995 [2024-11-19 10:55:28.289165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.995 [2024-11-19 10:55:28.289171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.995 [2024-11-19 10:55:28.289178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.995 [2024-11-19 10:55:28.289192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.995 qpair failed and we were unable to recover it.
00:28:20.995 [2024-11-19 10:55:28.299115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.995 [2024-11-19 10:55:28.299171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.995 [2024-11-19 10:55:28.299187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.995 [2024-11-19 10:55:28.299195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.995 [2024-11-19 10:55:28.299201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.995 [2024-11-19 10:55:28.299216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.995 qpair failed and we were unable to recover it.
00:28:20.995 [2024-11-19 10:55:28.309119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.995 [2024-11-19 10:55:28.309176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.995 [2024-11-19 10:55:28.309189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.995 [2024-11-19 10:55:28.309195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.995 [2024-11-19 10:55:28.309202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.995 [2024-11-19 10:55:28.309217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.995 qpair failed and we were unable to recover it.
00:28:20.995 [2024-11-19 10:55:28.319186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.995 [2024-11-19 10:55:28.319280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.995 [2024-11-19 10:55:28.319295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.995 [2024-11-19 10:55:28.319302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.995 [2024-11-19 10:55:28.319308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.995 [2024-11-19 10:55:28.319325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.995 qpair failed and we were unable to recover it.
00:28:20.995 [2024-11-19 10:55:28.329219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.995 [2024-11-19 10:55:28.329275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.995 [2024-11-19 10:55:28.329289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.995 [2024-11-19 10:55:28.329296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.995 [2024-11-19 10:55:28.329303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.995 [2024-11-19 10:55:28.329318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.995 qpair failed and we were unable to recover it.
00:28:20.995 [2024-11-19 10:55:28.339152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.995 [2024-11-19 10:55:28.339203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.995 [2024-11-19 10:55:28.339216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.995 [2024-11-19 10:55:28.339223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.995 [2024-11-19 10:55:28.339230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.995 [2024-11-19 10:55:28.339245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.995 qpair failed and we were unable to recover it.
00:28:20.995 [2024-11-19 10:55:28.349192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.995 [2024-11-19 10:55:28.349250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.995 [2024-11-19 10:55:28.349263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.995 [2024-11-19 10:55:28.349271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.995 [2024-11-19 10:55:28.349278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.995 [2024-11-19 10:55:28.349293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.995 qpair failed and we were unable to recover it.
00:28:20.995 [2024-11-19 10:55:28.359310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.995 [2024-11-19 10:55:28.359373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.995 [2024-11-19 10:55:28.359387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.995 [2024-11-19 10:55:28.359395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.995 [2024-11-19 10:55:28.359401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.995 [2024-11-19 10:55:28.359416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.995 qpair failed and we were unable to recover it.
00:28:20.995 [2024-11-19 10:55:28.369295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.995 [2024-11-19 10:55:28.369358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.995 [2024-11-19 10:55:28.369385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.995 [2024-11-19 10:55:28.369393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.995 [2024-11-19 10:55:28.369399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.995 [2024-11-19 10:55:28.369421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.995 qpair failed and we were unable to recover it.
00:28:20.995 [2024-11-19 10:55:28.379330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.995 [2024-11-19 10:55:28.379396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.995 [2024-11-19 10:55:28.379411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.995 [2024-11-19 10:55:28.379419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.995 [2024-11-19 10:55:28.379425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.995 [2024-11-19 10:55:28.379440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.995 qpair failed and we were unable to recover it.
00:28:20.995 [2024-11-19 10:55:28.389303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.996 [2024-11-19 10:55:28.389411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.996 [2024-11-19 10:55:28.389424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.996 [2024-11-19 10:55:28.389431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.996 [2024-11-19 10:55:28.389439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.996 [2024-11-19 10:55:28.389454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.996 qpair failed and we were unable to recover it.
00:28:20.996 [2024-11-19 10:55:28.399381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.996 [2024-11-19 10:55:28.399438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.996 [2024-11-19 10:55:28.399452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.996 [2024-11-19 10:55:28.399463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.996 [2024-11-19 10:55:28.399469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.996 [2024-11-19 10:55:28.399484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.996 qpair failed and we were unable to recover it.
00:28:20.996 [2024-11-19 10:55:28.409418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.996 [2024-11-19 10:55:28.409472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.996 [2024-11-19 10:55:28.409485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.996 [2024-11-19 10:55:28.409492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.996 [2024-11-19 10:55:28.409498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.996 [2024-11-19 10:55:28.409513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.996 qpair failed and we were unable to recover it.
00:28:20.996 [2024-11-19 10:55:28.419493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.996 [2024-11-19 10:55:28.419553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.996 [2024-11-19 10:55:28.419566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.996 [2024-11-19 10:55:28.419573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.996 [2024-11-19 10:55:28.419580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.996 [2024-11-19 10:55:28.419595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.996 qpair failed and we were unable to recover it.
00:28:20.996 [2024-11-19 10:55:28.429419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.996 [2024-11-19 10:55:28.429479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.996 [2024-11-19 10:55:28.429493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.996 [2024-11-19 10:55:28.429500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.996 [2024-11-19 10:55:28.429507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.996 [2024-11-19 10:55:28.429522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.996 qpair failed and we were unable to recover it.
00:28:20.996 [2024-11-19 10:55:28.439436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.996 [2024-11-19 10:55:28.439496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.996 [2024-11-19 10:55:28.439511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.996 [2024-11-19 10:55:28.439518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.996 [2024-11-19 10:55:28.439524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:20.996 [2024-11-19 10:55:28.439543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.996 qpair failed and we were unable to recover it.
00:28:21.257 [2024-11-19 10:55:28.449476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.257 [2024-11-19 10:55:28.449541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.257 [2024-11-19 10:55:28.449555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.257 [2024-11-19 10:55:28.449562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.257 [2024-11-19 10:55:28.449568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:21.257 [2024-11-19 10:55:28.449583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.257 qpair failed and we were unable to recover it.
00:28:21.257 [2024-11-19 10:55:28.459531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.257 [2024-11-19 10:55:28.459595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.257 [2024-11-19 10:55:28.459610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.257 [2024-11-19 10:55:28.459616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.257 [2024-11-19 10:55:28.459623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:21.257 [2024-11-19 10:55:28.459638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.257 qpair failed and we were unable to recover it.
00:28:21.257 [2024-11-19 10:55:28.469584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.257 [2024-11-19 10:55:28.469641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.257 [2024-11-19 10:55:28.469654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.257 [2024-11-19 10:55:28.469661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.257 [2024-11-19 10:55:28.469667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:21.257 [2024-11-19 10:55:28.469681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.257 qpair failed and we were unable to recover it.
00:28:21.257 [2024-11-19 10:55:28.479638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.257 [2024-11-19 10:55:28.479739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.257 [2024-11-19 10:55:28.479754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.257 [2024-11-19 10:55:28.479760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.257 [2024-11-19 10:55:28.479767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:21.257 [2024-11-19 10:55:28.479781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.257 qpair failed and we were unable to recover it.
00:28:21.257 [2024-11-19 10:55:28.489580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.257 [2024-11-19 10:55:28.489643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.257 [2024-11-19 10:55:28.489657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.257 [2024-11-19 10:55:28.489664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.257 [2024-11-19 10:55:28.489670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:21.257 [2024-11-19 10:55:28.489685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.257 qpair failed and we were unable to recover it.
00:28:21.257 [2024-11-19 10:55:28.499673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.257 [2024-11-19 10:55:28.499728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.257 [2024-11-19 10:55:28.499742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.257 [2024-11-19 10:55:28.499748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.257 [2024-11-19 10:55:28.499755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:21.257 [2024-11-19 10:55:28.499769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.257 qpair failed and we were unable to recover it.
00:28:21.257 [2024-11-19 10:55:28.509640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.257 [2024-11-19 10:55:28.509695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.257 [2024-11-19 10:55:28.509709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.257 [2024-11-19 10:55:28.509716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.257 [2024-11-19 10:55:28.509723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:21.257 [2024-11-19 10:55:28.509738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.257 qpair failed and we were unable to recover it.
00:28:21.257 [2024-11-19 10:55:28.519711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.257 [2024-11-19 10:55:28.519766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.258 [2024-11-19 10:55:28.519778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.258 [2024-11-19 10:55:28.519785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.258 [2024-11-19 10:55:28.519791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:21.258 [2024-11-19 10:55:28.519806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.258 qpair failed and we were unable to recover it.
00:28:21.258 [2024-11-19 10:55:28.529685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.258 [2024-11-19 10:55:28.529742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.258 [2024-11-19 10:55:28.529760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.258 [2024-11-19 10:55:28.529767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.258 [2024-11-19 10:55:28.529773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:21.258 [2024-11-19 10:55:28.529788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.258 qpair failed and we were unable to recover it.
00:28:21.258 [2024-11-19 10:55:28.539811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.258 [2024-11-19 10:55:28.539895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.258 [2024-11-19 10:55:28.539908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.258 [2024-11-19 10:55:28.539915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.258 [2024-11-19 10:55:28.539921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.258 [2024-11-19 10:55:28.539935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.258 qpair failed and we were unable to recover it. 00:28:21.258 [2024-11-19 10:55:28.549839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.258 [2024-11-19 10:55:28.549903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.258 [2024-11-19 10:55:28.549916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.258 [2024-11-19 10:55:28.549923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.258 [2024-11-19 10:55:28.549929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.258 [2024-11-19 10:55:28.549943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.258 qpair failed and we were unable to recover it. 00:28:21.258 [2024-11-19 10:55:28.559885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.258 [2024-11-19 10:55:28.559941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.258 [2024-11-19 10:55:28.559961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.258 [2024-11-19 10:55:28.559968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.258 [2024-11-19 10:55:28.559974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.258 [2024-11-19 10:55:28.559989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.258 qpair failed and we were unable to recover it. 
00:28:21.258 [2024-11-19 10:55:28.569882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.258 [2024-11-19 10:55:28.569953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.258 [2024-11-19 10:55:28.569966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.258 [2024-11-19 10:55:28.569973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.258 [2024-11-19 10:55:28.569979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.258 [2024-11-19 10:55:28.569998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.258 qpair failed and we were unable to recover it. 00:28:21.258 [2024-11-19 10:55:28.579874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.258 [2024-11-19 10:55:28.579954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.258 [2024-11-19 10:55:28.579968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.258 [2024-11-19 10:55:28.579974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.258 [2024-11-19 10:55:28.579980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.258 [2024-11-19 10:55:28.579995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.258 qpair failed and we were unable to recover it. 00:28:21.258 [2024-11-19 10:55:28.589940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.258 [2024-11-19 10:55:28.590004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.258 [2024-11-19 10:55:28.590018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.258 [2024-11-19 10:55:28.590025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.258 [2024-11-19 10:55:28.590031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.258 [2024-11-19 10:55:28.590046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.258 qpair failed and we were unable to recover it. 
00:28:21.258 [2024-11-19 10:55:28.600150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.258 [2024-11-19 10:55:28.600219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.258 [2024-11-19 10:55:28.600232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.258 [2024-11-19 10:55:28.600239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.258 [2024-11-19 10:55:28.600245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.258 [2024-11-19 10:55:28.600260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.258 qpair failed and we were unable to recover it. 00:28:21.258 [2024-11-19 10:55:28.609982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.258 [2024-11-19 10:55:28.610031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.258 [2024-11-19 10:55:28.610044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.258 [2024-11-19 10:55:28.610051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.258 [2024-11-19 10:55:28.610057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.258 [2024-11-19 10:55:28.610073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.258 qpair failed and we were unable to recover it. 00:28:21.258 [2024-11-19 10:55:28.620075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.258 [2024-11-19 10:55:28.620150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.258 [2024-11-19 10:55:28.620164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.258 [2024-11-19 10:55:28.620170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.258 [2024-11-19 10:55:28.620176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.258 [2024-11-19 10:55:28.620191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.258 qpair failed and we were unable to recover it. 
00:28:21.258 [2024-11-19 10:55:28.630021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.258 [2024-11-19 10:55:28.630082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.258 [2024-11-19 10:55:28.630095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.258 [2024-11-19 10:55:28.630101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.258 [2024-11-19 10:55:28.630108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.258 [2024-11-19 10:55:28.630123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.258 qpair failed and we were unable to recover it. 00:28:21.258 [2024-11-19 10:55:28.640116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.258 [2024-11-19 10:55:28.640173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.258 [2024-11-19 10:55:28.640186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.258 [2024-11-19 10:55:28.640192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.258 [2024-11-19 10:55:28.640198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.258 [2024-11-19 10:55:28.640213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.258 qpair failed and we were unable to recover it. 00:28:21.258 [2024-11-19 10:55:28.650164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.258 [2024-11-19 10:55:28.650220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.258 [2024-11-19 10:55:28.650233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.259 [2024-11-19 10:55:28.650240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.259 [2024-11-19 10:55:28.650246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.259 [2024-11-19 10:55:28.650260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.259 qpair failed and we were unable to recover it. 
00:28:21.259 [2024-11-19 10:55:28.660140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.259 [2024-11-19 10:55:28.660193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.259 [2024-11-19 10:55:28.660210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.259 [2024-11-19 10:55:28.660217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.259 [2024-11-19 10:55:28.660223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.259 [2024-11-19 10:55:28.660237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.259 qpair failed and we were unable to recover it. 00:28:21.259 [2024-11-19 10:55:28.670195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.259 [2024-11-19 10:55:28.670266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.259 [2024-11-19 10:55:28.670279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.259 [2024-11-19 10:55:28.670286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.259 [2024-11-19 10:55:28.670292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.259 [2024-11-19 10:55:28.670306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.259 qpair failed and we were unable to recover it. 00:28:21.259 [2024-11-19 10:55:28.680214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.259 [2024-11-19 10:55:28.680270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.259 [2024-11-19 10:55:28.680282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.259 [2024-11-19 10:55:28.680289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.259 [2024-11-19 10:55:28.680295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.259 [2024-11-19 10:55:28.680309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.259 qpair failed and we were unable to recover it. 
00:28:21.259 [2024-11-19 10:55:28.690244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.259 [2024-11-19 10:55:28.690312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.259 [2024-11-19 10:55:28.690326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.259 [2024-11-19 10:55:28.690332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.259 [2024-11-19 10:55:28.690338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.259 [2024-11-19 10:55:28.690353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.259 qpair failed and we were unable to recover it. 00:28:21.259 [2024-11-19 10:55:28.700283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.259 [2024-11-19 10:55:28.700337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.259 [2024-11-19 10:55:28.700350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.259 [2024-11-19 10:55:28.700357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.259 [2024-11-19 10:55:28.700366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.259 [2024-11-19 10:55:28.700380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.259 qpair failed and we were unable to recover it. 00:28:21.520 [2024-11-19 10:55:28.710306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.520 [2024-11-19 10:55:28.710363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.520 [2024-11-19 10:55:28.710376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.520 [2024-11-19 10:55:28.710382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.520 [2024-11-19 10:55:28.710388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.520 [2024-11-19 10:55:28.710403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.520 qpair failed and we were unable to recover it. 
00:28:21.520 [2024-11-19 10:55:28.720332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.520 [2024-11-19 10:55:28.720381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.520 [2024-11-19 10:55:28.720394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.520 [2024-11-19 10:55:28.720401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.520 [2024-11-19 10:55:28.720406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.520 [2024-11-19 10:55:28.720421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.520 qpair failed and we were unable to recover it. 00:28:21.520 [2024-11-19 10:55:28.730320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.520 [2024-11-19 10:55:28.730418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.520 [2024-11-19 10:55:28.730431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.521 [2024-11-19 10:55:28.730438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.521 [2024-11-19 10:55:28.730444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.521 [2024-11-19 10:55:28.730458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.521 qpair failed and we were unable to recover it. 00:28:21.521 [2024-11-19 10:55:28.740381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.521 [2024-11-19 10:55:28.740432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.521 [2024-11-19 10:55:28.740445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.521 [2024-11-19 10:55:28.740451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.521 [2024-11-19 10:55:28.740458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.521 [2024-11-19 10:55:28.740472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.521 qpair failed and we were unable to recover it. 
00:28:21.521 [2024-11-19 10:55:28.750407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.521 [2024-11-19 10:55:28.750463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.521 [2024-11-19 10:55:28.750476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.521 [2024-11-19 10:55:28.750483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.521 [2024-11-19 10:55:28.750488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.521 [2024-11-19 10:55:28.750503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.521 qpair failed and we were unable to recover it. 00:28:21.521 [2024-11-19 10:55:28.760529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.521 [2024-11-19 10:55:28.760583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.521 [2024-11-19 10:55:28.760598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.521 [2024-11-19 10:55:28.760605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.521 [2024-11-19 10:55:28.760611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.521 [2024-11-19 10:55:28.760626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.521 qpair failed and we were unable to recover it. 00:28:21.521 [2024-11-19 10:55:28.770455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.521 [2024-11-19 10:55:28.770511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.521 [2024-11-19 10:55:28.770525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.521 [2024-11-19 10:55:28.770531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.521 [2024-11-19 10:55:28.770538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.521 [2024-11-19 10:55:28.770552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.521 qpair failed and we were unable to recover it. 
00:28:21.521 [2024-11-19 10:55:28.780486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.521 [2024-11-19 10:55:28.780537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.521 [2024-11-19 10:55:28.780550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.521 [2024-11-19 10:55:28.780556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.521 [2024-11-19 10:55:28.780562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.521 [2024-11-19 10:55:28.780578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.521 qpair failed and we were unable to recover it. 00:28:21.521 [2024-11-19 10:55:28.790511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.521 [2024-11-19 10:55:28.790565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.521 [2024-11-19 10:55:28.790581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.521 [2024-11-19 10:55:28.790588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.521 [2024-11-19 10:55:28.790594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.521 [2024-11-19 10:55:28.790609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.521 qpair failed and we were unable to recover it. 00:28:21.521 [2024-11-19 10:55:28.800542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.521 [2024-11-19 10:55:28.800595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.521 [2024-11-19 10:55:28.800608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.521 [2024-11-19 10:55:28.800615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.521 [2024-11-19 10:55:28.800621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.521 [2024-11-19 10:55:28.800635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.521 qpair failed and we were unable to recover it. 
00:28:21.521 [2024-11-19 10:55:28.810544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.521 [2024-11-19 10:55:28.810641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.521 [2024-11-19 10:55:28.810654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.521 [2024-11-19 10:55:28.810661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.521 [2024-11-19 10:55:28.810666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.521 [2024-11-19 10:55:28.810681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.521 qpair failed and we were unable to recover it. 00:28:21.521 [2024-11-19 10:55:28.820597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.521 [2024-11-19 10:55:28.820650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.521 [2024-11-19 10:55:28.820663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.521 [2024-11-19 10:55:28.820669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.521 [2024-11-19 10:55:28.820675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.521 [2024-11-19 10:55:28.820690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.521 qpair failed and we were unable to recover it. 00:28:21.521 [2024-11-19 10:55:28.830640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.521 [2024-11-19 10:55:28.830714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.521 [2024-11-19 10:55:28.830727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.521 [2024-11-19 10:55:28.830736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.521 [2024-11-19 10:55:28.830742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.521 [2024-11-19 10:55:28.830756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.521 qpair failed and we were unable to recover it. 
00:28:21.521 [2024-11-19 10:55:28.840656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.521 [2024-11-19 10:55:28.840715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.521 [2024-11-19 10:55:28.840728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.521 [2024-11-19 10:55:28.840735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.521 [2024-11-19 10:55:28.840741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.521 [2024-11-19 10:55:28.840756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.521 qpair failed and we were unable to recover it. 00:28:21.521 [2024-11-19 10:55:28.850669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.521 [2024-11-19 10:55:28.850717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.521 [2024-11-19 10:55:28.850730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.521 [2024-11-19 10:55:28.850736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.521 [2024-11-19 10:55:28.850743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.521 [2024-11-19 10:55:28.850758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.521 qpair failed and we were unable to recover it. 00:28:21.521 [2024-11-19 10:55:28.860730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.521 [2024-11-19 10:55:28.860782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.521 [2024-11-19 10:55:28.860795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.522 [2024-11-19 10:55:28.860802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.522 [2024-11-19 10:55:28.860808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.522 [2024-11-19 10:55:28.860824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.522 qpair failed and we were unable to recover it. 
00:28:21.522 [2024-11-19 10:55:28.870739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.522 [2024-11-19 10:55:28.870793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.522 [2024-11-19 10:55:28.870806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.522 [2024-11-19 10:55:28.870813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.522 [2024-11-19 10:55:28.870819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.522 [2024-11-19 10:55:28.870834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.522 qpair failed and we were unable to recover it. 00:28:21.522 [2024-11-19 10:55:28.880780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.522 [2024-11-19 10:55:28.880833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.522 [2024-11-19 10:55:28.880846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.522 [2024-11-19 10:55:28.880852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.522 [2024-11-19 10:55:28.880859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.522 [2024-11-19 10:55:28.880873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.522 qpair failed and we were unable to recover it. 00:28:21.522 [2024-11-19 10:55:28.890779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.522 [2024-11-19 10:55:28.890833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.522 [2024-11-19 10:55:28.890846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.522 [2024-11-19 10:55:28.890852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.522 [2024-11-19 10:55:28.890858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.522 [2024-11-19 10:55:28.890873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.522 qpair failed and we were unable to recover it. 
00:28:21.522 [2024-11-19 10:55:28.900810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.522 [2024-11-19 10:55:28.900864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.522 [2024-11-19 10:55:28.900876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.522 [2024-11-19 10:55:28.900883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.522 [2024-11-19 10:55:28.900889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.522 [2024-11-19 10:55:28.900903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.522 qpair failed and we were unable to recover it. 00:28:21.522 [2024-11-19 10:55:28.910879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.522 [2024-11-19 10:55:28.910959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.522 [2024-11-19 10:55:28.910974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.522 [2024-11-19 10:55:28.910980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.522 [2024-11-19 10:55:28.910986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.522 [2024-11-19 10:55:28.911002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.522 qpair failed and we were unable to recover it. 00:28:21.522 [2024-11-19 10:55:28.920877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.522 [2024-11-19 10:55:28.920934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.522 [2024-11-19 10:55:28.920951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.522 [2024-11-19 10:55:28.920958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.522 [2024-11-19 10:55:28.920963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.522 [2024-11-19 10:55:28.920978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.522 qpair failed and we were unable to recover it. 
00:28:21.522 [2024-11-19 10:55:28.930895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.522 [2024-11-19 10:55:28.930953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.522 [2024-11-19 10:55:28.930966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.522 [2024-11-19 10:55:28.930972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.522 [2024-11-19 10:55:28.930979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.522 [2024-11-19 10:55:28.930993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.522 qpair failed and we were unable to recover it. 00:28:21.522 [2024-11-19 10:55:28.940866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.522 [2024-11-19 10:55:28.940951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.522 [2024-11-19 10:55:28.940965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.522 [2024-11-19 10:55:28.940971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.522 [2024-11-19 10:55:28.940977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.522 [2024-11-19 10:55:28.940992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.522 qpair failed and we were unable to recover it. 00:28:21.522 [2024-11-19 10:55:28.950978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.522 [2024-11-19 10:55:28.951045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.522 [2024-11-19 10:55:28.951058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.522 [2024-11-19 10:55:28.951064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.522 [2024-11-19 10:55:28.951070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.522 [2024-11-19 10:55:28.951085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.522 qpair failed and we were unable to recover it. 
00:28:21.522 [2024-11-19 10:55:28.960990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.522 [2024-11-19 10:55:28.961046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.522 [2024-11-19 10:55:28.961059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.522 [2024-11-19 10:55:28.961068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.522 [2024-11-19 10:55:28.961074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.522 [2024-11-19 10:55:28.961089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.522 qpair failed and we were unable to recover it. 00:28:21.783 [2024-11-19 10:55:28.970944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.783 [2024-11-19 10:55:28.971005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.783 [2024-11-19 10:55:28.971018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.783 [2024-11-19 10:55:28.971025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.783 [2024-11-19 10:55:28.971031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.783 [2024-11-19 10:55:28.971046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.783 qpair failed and we were unable to recover it. 00:28:21.783 [2024-11-19 10:55:28.981055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.783 [2024-11-19 10:55:28.981103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.783 [2024-11-19 10:55:28.981116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.783 [2024-11-19 10:55:28.981122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.783 [2024-11-19 10:55:28.981128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.783 [2024-11-19 10:55:28.981144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.783 qpair failed and we were unable to recover it. 
00:28:21.783 [2024-11-19 10:55:28.991071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.783 [2024-11-19 10:55:28.991129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.783 [2024-11-19 10:55:28.991142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.783 [2024-11-19 10:55:28.991149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.783 [2024-11-19 10:55:28.991154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.783 [2024-11-19 10:55:28.991169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.783 qpair failed and we were unable to recover it. 00:28:21.783 [2024-11-19 10:55:29.001113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.783 [2024-11-19 10:55:29.001170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.783 [2024-11-19 10:55:29.001182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.783 [2024-11-19 10:55:29.001189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.783 [2024-11-19 10:55:29.001195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.783 [2024-11-19 10:55:29.001212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.783 qpair failed and we were unable to recover it. 00:28:21.783 [2024-11-19 10:55:29.011058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.783 [2024-11-19 10:55:29.011116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.783 [2024-11-19 10:55:29.011128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.783 [2024-11-19 10:55:29.011135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.783 [2024-11-19 10:55:29.011141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.783 [2024-11-19 10:55:29.011155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.783 qpair failed and we were unable to recover it. 
00:28:21.783 [2024-11-19 10:55:29.021157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.783 [2024-11-19 10:55:29.021213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.783 [2024-11-19 10:55:29.021227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.783 [2024-11-19 10:55:29.021233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.783 [2024-11-19 10:55:29.021239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.783 [2024-11-19 10:55:29.021253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.783 qpair failed and we were unable to recover it. 00:28:21.783 [2024-11-19 10:55:29.031186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.784 [2024-11-19 10:55:29.031238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.784 [2024-11-19 10:55:29.031251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.784 [2024-11-19 10:55:29.031257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.784 [2024-11-19 10:55:29.031263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.784 [2024-11-19 10:55:29.031278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.784 qpair failed and we were unable to recover it. 00:28:21.784 [2024-11-19 10:55:29.041228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.784 [2024-11-19 10:55:29.041283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.784 [2024-11-19 10:55:29.041296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.784 [2024-11-19 10:55:29.041303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.784 [2024-11-19 10:55:29.041309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.784 [2024-11-19 10:55:29.041323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.784 qpair failed and we were unable to recover it. 
00:28:21.784 [2024-11-19 10:55:29.051243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.784 [2024-11-19 10:55:29.051292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.784 [2024-11-19 10:55:29.051305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.784 [2024-11-19 10:55:29.051312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.784 [2024-11-19 10:55:29.051318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.784 [2024-11-19 10:55:29.051333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.784 qpair failed and we were unable to recover it. 00:28:21.784 [2024-11-19 10:55:29.061321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.784 [2024-11-19 10:55:29.061386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.784 [2024-11-19 10:55:29.061399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.784 [2024-11-19 10:55:29.061406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.784 [2024-11-19 10:55:29.061412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.784 [2024-11-19 10:55:29.061427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.784 qpair failed and we were unable to recover it. 00:28:21.784 [2024-11-19 10:55:29.071323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.784 [2024-11-19 10:55:29.071391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.784 [2024-11-19 10:55:29.071403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.784 [2024-11-19 10:55:29.071410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.784 [2024-11-19 10:55:29.071416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.784 [2024-11-19 10:55:29.071431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.784 qpair failed and we were unable to recover it. 
00:28:21.784 [2024-11-19 10:55:29.081324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.784 [2024-11-19 10:55:29.081375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.784 [2024-11-19 10:55:29.081387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.784 [2024-11-19 10:55:29.081394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.784 [2024-11-19 10:55:29.081400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.784 [2024-11-19 10:55:29.081415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.784 qpair failed and we were unable to recover it. 00:28:21.784 [2024-11-19 10:55:29.091356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.784 [2024-11-19 10:55:29.091410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.784 [2024-11-19 10:55:29.091426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.784 [2024-11-19 10:55:29.091432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.784 [2024-11-19 10:55:29.091439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.784 [2024-11-19 10:55:29.091453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.784 qpair failed and we were unable to recover it. 00:28:21.784 [2024-11-19 10:55:29.101372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.784 [2024-11-19 10:55:29.101425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.784 [2024-11-19 10:55:29.101438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.784 [2024-11-19 10:55:29.101444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.784 [2024-11-19 10:55:29.101451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.784 [2024-11-19 10:55:29.101465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.784 qpair failed and we were unable to recover it. 
00:28:21.784 [2024-11-19 10:55:29.111415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.784 [2024-11-19 10:55:29.111469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.784 [2024-11-19 10:55:29.111482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.784 [2024-11-19 10:55:29.111488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.784 [2024-11-19 10:55:29.111494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.784 [2024-11-19 10:55:29.111508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.784 qpair failed and we were unable to recover it. 00:28:21.784 [2024-11-19 10:55:29.121497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.784 [2024-11-19 10:55:29.121553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.784 [2024-11-19 10:55:29.121566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.784 [2024-11-19 10:55:29.121572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.784 [2024-11-19 10:55:29.121578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.784 [2024-11-19 10:55:29.121593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.784 qpair failed and we were unable to recover it. 00:28:21.784 [2024-11-19 10:55:29.131462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.784 [2024-11-19 10:55:29.131514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.784 [2024-11-19 10:55:29.131527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.784 [2024-11-19 10:55:29.131533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.784 [2024-11-19 10:55:29.131542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.784 [2024-11-19 10:55:29.131556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.784 qpair failed and we were unable to recover it. 
00:28:21.784 [2024-11-19 10:55:29.141484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.784 [2024-11-19 10:55:29.141535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.784 [2024-11-19 10:55:29.141548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.784 [2024-11-19 10:55:29.141554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.784 [2024-11-19 10:55:29.141561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.784 [2024-11-19 10:55:29.141575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.784 qpair failed and we were unable to recover it. 00:28:21.784 [2024-11-19 10:55:29.151543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.784 [2024-11-19 10:55:29.151621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.784 [2024-11-19 10:55:29.151635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.784 [2024-11-19 10:55:29.151641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.784 [2024-11-19 10:55:29.151647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.784 [2024-11-19 10:55:29.151661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.784 qpair failed and we were unable to recover it. 00:28:21.785 [2024-11-19 10:55:29.161585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.785 [2024-11-19 10:55:29.161643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.785 [2024-11-19 10:55:29.161656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.785 [2024-11-19 10:55:29.161663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.785 [2024-11-19 10:55:29.161669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.785 [2024-11-19 10:55:29.161683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.785 qpair failed and we were unable to recover it. 
00:28:21.785 [2024-11-19 10:55:29.171630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.785 [2024-11-19 10:55:29.171684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.785 [2024-11-19 10:55:29.171696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.785 [2024-11-19 10:55:29.171703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.785 [2024-11-19 10:55:29.171709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.785 [2024-11-19 10:55:29.171724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.785 qpair failed and we were unable to recover it. 00:28:21.785 [2024-11-19 10:55:29.181614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.785 [2024-11-19 10:55:29.181663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.785 [2024-11-19 10:55:29.181675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.785 [2024-11-19 10:55:29.181682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.785 [2024-11-19 10:55:29.181687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.785 [2024-11-19 10:55:29.181702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.785 qpair failed and we were unable to recover it. 00:28:21.785 [2024-11-19 10:55:29.191572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.785 [2024-11-19 10:55:29.191627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.785 [2024-11-19 10:55:29.191640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.785 [2024-11-19 10:55:29.191647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.785 [2024-11-19 10:55:29.191652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.785 [2024-11-19 10:55:29.191667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.785 qpair failed and we were unable to recover it. 
00:28:21.785 [2024-11-19 10:55:29.201693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.785 [2024-11-19 10:55:29.201746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.785 [2024-11-19 10:55:29.201759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.785 [2024-11-19 10:55:29.201765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.785 [2024-11-19 10:55:29.201771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.785 [2024-11-19 10:55:29.201786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.785 qpair failed and we were unable to recover it. 00:28:21.785 [2024-11-19 10:55:29.211664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.785 [2024-11-19 10:55:29.211749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.785 [2024-11-19 10:55:29.211761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.785 [2024-11-19 10:55:29.211768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.785 [2024-11-19 10:55:29.211774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.785 [2024-11-19 10:55:29.211788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.785 qpair failed and we were unable to recover it. 00:28:21.785 [2024-11-19 10:55:29.221714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.785 [2024-11-19 10:55:29.221762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.785 [2024-11-19 10:55:29.221778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.785 [2024-11-19 10:55:29.221785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.785 [2024-11-19 10:55:29.221791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:21.785 [2024-11-19 10:55:29.221805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.785 qpair failed and we were unable to recover it. 
00:28:22.044 [2024-11-19 10:55:29.231748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.044 [2024-11-19 10:55:29.231817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.044 [2024-11-19 10:55:29.231831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.044 [2024-11-19 10:55:29.231838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.044 [2024-11-19 10:55:29.231844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.044 [2024-11-19 10:55:29.231858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.044 qpair failed and we were unable to recover it. 00:28:22.044 [2024-11-19 10:55:29.241781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.044 [2024-11-19 10:55:29.241834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.044 [2024-11-19 10:55:29.241847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.044 [2024-11-19 10:55:29.241853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.044 [2024-11-19 10:55:29.241859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.044 [2024-11-19 10:55:29.241873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.044 qpair failed and we were unable to recover it. 00:28:22.044 [2024-11-19 10:55:29.251819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.044 [2024-11-19 10:55:29.251868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.044 [2024-11-19 10:55:29.251881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.044 [2024-11-19 10:55:29.251887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.044 [2024-11-19 10:55:29.251893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.044 [2024-11-19 10:55:29.251908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.044 qpair failed and we were unable to recover it. 
00:28:22.044 [2024-11-19 10:55:29.261862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.044 [2024-11-19 10:55:29.261919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.044 [2024-11-19 10:55:29.261932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.044 [2024-11-19 10:55:29.261939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.044 [2024-11-19 10:55:29.261951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.044 [2024-11-19 10:55:29.261967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.044 qpair failed and we were unable to recover it. 00:28:22.044 [2024-11-19 10:55:29.271870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.044 [2024-11-19 10:55:29.271923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.044 [2024-11-19 10:55:29.271936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.044 [2024-11-19 10:55:29.271943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.044 [2024-11-19 10:55:29.271952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.044 [2024-11-19 10:55:29.271968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.044 qpair failed and we were unable to recover it. 00:28:22.044 [2024-11-19 10:55:29.281891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.044 [2024-11-19 10:55:29.281950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.044 [2024-11-19 10:55:29.281963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.044 [2024-11-19 10:55:29.281971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.044 [2024-11-19 10:55:29.281977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.044 [2024-11-19 10:55:29.281991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.044 qpair failed and we were unable to recover it. 
00:28:22.044 [2024-11-19 10:55:29.291959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.044 [2024-11-19 10:55:29.292015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.044 [2024-11-19 10:55:29.292028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.044 [2024-11-19 10:55:29.292034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.044 [2024-11-19 10:55:29.292040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.044 [2024-11-19 10:55:29.292055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.044 qpair failed and we were unable to recover it. 00:28:22.044 [2024-11-19 10:55:29.301951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.044 [2024-11-19 10:55:29.302006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.044 [2024-11-19 10:55:29.302018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.044 [2024-11-19 10:55:29.302025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.044 [2024-11-19 10:55:29.302030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.044 [2024-11-19 10:55:29.302045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.044 qpair failed and we were unable to recover it. 00:28:22.044 [2024-11-19 10:55:29.311993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.044 [2024-11-19 10:55:29.312046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.044 [2024-11-19 10:55:29.312059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.045 [2024-11-19 10:55:29.312065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.045 [2024-11-19 10:55:29.312071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.045 [2024-11-19 10:55:29.312085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.045 qpair failed and we were unable to recover it. 
00:28:22.045 [2024-11-19 10:55:29.322038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.045 [2024-11-19 10:55:29.322094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.045 [2024-11-19 10:55:29.322107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.045 [2024-11-19 10:55:29.322114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.045 [2024-11-19 10:55:29.322120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.045 [2024-11-19 10:55:29.322134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.045 qpair failed and we were unable to recover it. 00:28:22.045 [2024-11-19 10:55:29.332035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.045 [2024-11-19 10:55:29.332091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.045 [2024-11-19 10:55:29.332103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.045 [2024-11-19 10:55:29.332110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.045 [2024-11-19 10:55:29.332116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.045 [2024-11-19 10:55:29.332130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.045 qpair failed and we were unable to recover it. 00:28:22.045 [2024-11-19 10:55:29.342058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.045 [2024-11-19 10:55:29.342114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.045 [2024-11-19 10:55:29.342127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.045 [2024-11-19 10:55:29.342133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.045 [2024-11-19 10:55:29.342139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.045 [2024-11-19 10:55:29.342153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.045 qpair failed and we were unable to recover it. 
00:28:22.045 [2024-11-19 10:55:29.352100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.045 [2024-11-19 10:55:29.352157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.045 [2024-11-19 10:55:29.352174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.045 [2024-11-19 10:55:29.352181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.045 [2024-11-19 10:55:29.352187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.045 [2024-11-19 10:55:29.352202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.045 qpair failed and we were unable to recover it. 00:28:22.045 [2024-11-19 10:55:29.362131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.045 [2024-11-19 10:55:29.362186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.045 [2024-11-19 10:55:29.362198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.045 [2024-11-19 10:55:29.362205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.045 [2024-11-19 10:55:29.362211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.045 [2024-11-19 10:55:29.362225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.045 qpair failed and we were unable to recover it. 00:28:22.045 [2024-11-19 10:55:29.372129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.045 [2024-11-19 10:55:29.372198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.045 [2024-11-19 10:55:29.372211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.045 [2024-11-19 10:55:29.372218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.045 [2024-11-19 10:55:29.372224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.045 [2024-11-19 10:55:29.372238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.045 qpair failed and we were unable to recover it. 
00:28:22.045 [2024-11-19 10:55:29.382192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.045 [2024-11-19 10:55:29.382245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.045 [2024-11-19 10:55:29.382259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.045 [2024-11-19 10:55:29.382265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.045 [2024-11-19 10:55:29.382271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.045 [2024-11-19 10:55:29.382286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.045 qpair failed and we were unable to recover it. 00:28:22.045 [2024-11-19 10:55:29.392260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.045 [2024-11-19 10:55:29.392315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.045 [2024-11-19 10:55:29.392328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.045 [2024-11-19 10:55:29.392338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.045 [2024-11-19 10:55:29.392344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.045 [2024-11-19 10:55:29.392359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.045 qpair failed and we were unable to recover it. 00:28:22.045 [2024-11-19 10:55:29.402244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.045 [2024-11-19 10:55:29.402299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.045 [2024-11-19 10:55:29.402312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.045 [2024-11-19 10:55:29.402319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.045 [2024-11-19 10:55:29.402325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.045 [2024-11-19 10:55:29.402340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.045 qpair failed and we were unable to recover it. 
00:28:22.045 [2024-11-19 10:55:29.412246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.045 [2024-11-19 10:55:29.412300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.045 [2024-11-19 10:55:29.412314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.045 [2024-11-19 10:55:29.412320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.045 [2024-11-19 10:55:29.412326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.045 [2024-11-19 10:55:29.412340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.045 qpair failed and we were unable to recover it. 00:28:22.045 [2024-11-19 10:55:29.422292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.045 [2024-11-19 10:55:29.422339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.045 [2024-11-19 10:55:29.422351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.045 [2024-11-19 10:55:29.422358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.045 [2024-11-19 10:55:29.422364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.045 [2024-11-19 10:55:29.422379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.045 qpair failed and we were unable to recover it. 00:28:22.045 [2024-11-19 10:55:29.432340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.045 [2024-11-19 10:55:29.432408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.045 [2024-11-19 10:55:29.432422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.045 [2024-11-19 10:55:29.432429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.045 [2024-11-19 10:55:29.432436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.045 [2024-11-19 10:55:29.432451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.045 qpair failed and we were unable to recover it. 
00:28:22.045 [2024-11-19 10:55:29.442350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.045 [2024-11-19 10:55:29.442445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.045 [2024-11-19 10:55:29.442459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.046 [2024-11-19 10:55:29.442465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.046 [2024-11-19 10:55:29.442471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.046 [2024-11-19 10:55:29.442486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.046 qpair failed and we were unable to recover it. 00:28:22.046 [2024-11-19 10:55:29.452393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.046 [2024-11-19 10:55:29.452449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.046 [2024-11-19 10:55:29.452462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.046 [2024-11-19 10:55:29.452468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.046 [2024-11-19 10:55:29.452474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.046 [2024-11-19 10:55:29.452489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.046 qpair failed and we were unable to recover it. 00:28:22.046 [2024-11-19 10:55:29.462415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.046 [2024-11-19 10:55:29.462468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.046 [2024-11-19 10:55:29.462481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.046 [2024-11-19 10:55:29.462487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.046 [2024-11-19 10:55:29.462494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.046 [2024-11-19 10:55:29.462508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.046 qpair failed and we were unable to recover it. 
00:28:22.046 [2024-11-19 10:55:29.472438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.046 [2024-11-19 10:55:29.472494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.046 [2024-11-19 10:55:29.472508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.046 [2024-11-19 10:55:29.472514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.046 [2024-11-19 10:55:29.472520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.046 [2024-11-19 10:55:29.472534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.046 qpair failed and we were unable to recover it. 00:28:22.046 [2024-11-19 10:55:29.482479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.046 [2024-11-19 10:55:29.482533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.046 [2024-11-19 10:55:29.482548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.046 [2024-11-19 10:55:29.482555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.046 [2024-11-19 10:55:29.482561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.046 [2024-11-19 10:55:29.482575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.046 qpair failed and we were unable to recover it. 00:28:22.046 [2024-11-19 10:55:29.492508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.306 [2024-11-19 10:55:29.492561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.306 [2024-11-19 10:55:29.492574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.306 [2024-11-19 10:55:29.492580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.306 [2024-11-19 10:55:29.492586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.306 [2024-11-19 10:55:29.492601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.306 qpair failed and we were unable to recover it. 
00:28:22.306 [2024-11-19 10:55:29.502530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.306 [2024-11-19 10:55:29.502586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.306 [2024-11-19 10:55:29.502599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.306 [2024-11-19 10:55:29.502605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.306 [2024-11-19 10:55:29.502611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.306 [2024-11-19 10:55:29.502625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.306 qpair failed and we were unable to recover it. 00:28:22.306 [2024-11-19 10:55:29.512564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.306 [2024-11-19 10:55:29.512617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.306 [2024-11-19 10:55:29.512630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.306 [2024-11-19 10:55:29.512636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.306 [2024-11-19 10:55:29.512642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.306 [2024-11-19 10:55:29.512657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.306 qpair failed and we were unable to recover it. 00:28:22.306 [2024-11-19 10:55:29.522528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.306 [2024-11-19 10:55:29.522586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.306 [2024-11-19 10:55:29.522599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.306 [2024-11-19 10:55:29.522609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.306 [2024-11-19 10:55:29.522615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.306 [2024-11-19 10:55:29.522629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.306 qpair failed and we were unable to recover it. 
00:28:22.306 [2024-11-19 10:55:29.532632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.306 [2024-11-19 10:55:29.532679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.306 [2024-11-19 10:55:29.532691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.306 [2024-11-19 10:55:29.532697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.306 [2024-11-19 10:55:29.532703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.306 [2024-11-19 10:55:29.532718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.306 qpair failed and we were unable to recover it. 00:28:22.306 [2024-11-19 10:55:29.542677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.306 [2024-11-19 10:55:29.542736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.307 [2024-11-19 10:55:29.542749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.307 [2024-11-19 10:55:29.542755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.307 [2024-11-19 10:55:29.542761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.307 [2024-11-19 10:55:29.542776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.307 qpair failed and we were unable to recover it. 00:28:22.307 [2024-11-19 10:55:29.552705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.307 [2024-11-19 10:55:29.552785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.307 [2024-11-19 10:55:29.552798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.307 [2024-11-19 10:55:29.552805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.307 [2024-11-19 10:55:29.552811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.307 [2024-11-19 10:55:29.552826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.307 qpair failed and we were unable to recover it. 
00:28:22.307 [2024-11-19 10:55:29.562707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.307 [2024-11-19 10:55:29.562760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.307 [2024-11-19 10:55:29.562774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.307 [2024-11-19 10:55:29.562780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.307 [2024-11-19 10:55:29.562786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.307 [2024-11-19 10:55:29.562804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.307 qpair failed and we were unable to recover it. 00:28:22.307 [2024-11-19 10:55:29.572723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.307 [2024-11-19 10:55:29.572774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.307 [2024-11-19 10:55:29.572787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.307 [2024-11-19 10:55:29.572794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.307 [2024-11-19 10:55:29.572800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.307 [2024-11-19 10:55:29.572815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.307 qpair failed and we were unable to recover it. 00:28:22.307 [2024-11-19 10:55:29.582766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.307 [2024-11-19 10:55:29.582819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.307 [2024-11-19 10:55:29.582832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.307 [2024-11-19 10:55:29.582839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.307 [2024-11-19 10:55:29.582845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.307 [2024-11-19 10:55:29.582859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.307 qpair failed and we were unable to recover it. 
00:28:22.307 [2024-11-19 10:55:29.592793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.307 [2024-11-19 10:55:29.592847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.307 [2024-11-19 10:55:29.592861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.307 [2024-11-19 10:55:29.592867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.307 [2024-11-19 10:55:29.592874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.307 [2024-11-19 10:55:29.592888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.307 qpair failed and we were unable to recover it. 00:28:22.307 [2024-11-19 10:55:29.602753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.307 [2024-11-19 10:55:29.602823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.307 [2024-11-19 10:55:29.602836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.307 [2024-11-19 10:55:29.602843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.307 [2024-11-19 10:55:29.602849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.307 [2024-11-19 10:55:29.602863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.307 qpair failed and we were unable to recover it. 00:28:22.307 [2024-11-19 10:55:29.612867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.307 [2024-11-19 10:55:29.612914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.307 [2024-11-19 10:55:29.612928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.307 [2024-11-19 10:55:29.612935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.307 [2024-11-19 10:55:29.612941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.307 [2024-11-19 10:55:29.612959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.307 qpair failed and we were unable to recover it. 
00:28:22.307 [2024-11-19 10:55:29.622881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.307 [2024-11-19 10:55:29.622936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.307 [2024-11-19 10:55:29.622952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.307 [2024-11-19 10:55:29.622959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.307 [2024-11-19 10:55:29.622965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.307 [2024-11-19 10:55:29.622980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.307 qpair failed and we were unable to recover it. 00:28:22.307 [2024-11-19 10:55:29.632893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.307 [2024-11-19 10:55:29.632955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.307 [2024-11-19 10:55:29.632968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.307 [2024-11-19 10:55:29.632975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.307 [2024-11-19 10:55:29.632981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.307 [2024-11-19 10:55:29.632996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.307 qpair failed and we were unable to recover it. 00:28:22.307 [2024-11-19 10:55:29.642955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.307 [2024-11-19 10:55:29.643013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.307 [2024-11-19 10:55:29.643027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.307 [2024-11-19 10:55:29.643034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.307 [2024-11-19 10:55:29.643040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.307 [2024-11-19 10:55:29.643054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.307 qpair failed and we were unable to recover it. 
00:28:22.307 [2024-11-19 10:55:29.652962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.307 [2024-11-19 10:55:29.653018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.307 [2024-11-19 10:55:29.653035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.307 [2024-11-19 10:55:29.653042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.307 [2024-11-19 10:55:29.653048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.307 [2024-11-19 10:55:29.653063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.307 qpair failed and we were unable to recover it. 00:28:22.307 [2024-11-19 10:55:29.662987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.307 [2024-11-19 10:55:29.663042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.307 [2024-11-19 10:55:29.663055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.307 [2024-11-19 10:55:29.663062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.307 [2024-11-19 10:55:29.663068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.307 [2024-11-19 10:55:29.663083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.307 qpair failed and we were unable to recover it. 00:28:22.307 [2024-11-19 10:55:29.672952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.308 [2024-11-19 10:55:29.673010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.308 [2024-11-19 10:55:29.673023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.308 [2024-11-19 10:55:29.673030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.308 [2024-11-19 10:55:29.673036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.308 [2024-11-19 10:55:29.673050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.308 qpair failed and we were unable to recover it. 
00:28:22.308 [2024-11-19 10:55:29.683109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.308 [2024-11-19 10:55:29.683173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.308 [2024-11-19 10:55:29.683186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.308 [2024-11-19 10:55:29.683193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.308 [2024-11-19 10:55:29.683199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.308 [2024-11-19 10:55:29.683214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.308 qpair failed and we were unable to recover it. 00:28:22.308 [2024-11-19 10:55:29.693077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.308 [2024-11-19 10:55:29.693132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.308 [2024-11-19 10:55:29.693145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.308 [2024-11-19 10:55:29.693151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.308 [2024-11-19 10:55:29.693161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.308 [2024-11-19 10:55:29.693175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.308 qpair failed and we were unable to recover it. 00:28:22.308 [2024-11-19 10:55:29.703110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.308 [2024-11-19 10:55:29.703164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.308 [2024-11-19 10:55:29.703177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.308 [2024-11-19 10:55:29.703183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.308 [2024-11-19 10:55:29.703189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.308 [2024-11-19 10:55:29.703204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.308 qpair failed and we were unable to recover it. 
00:28:22.308 [2024-11-19 10:55:29.713146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.308 [2024-11-19 10:55:29.713200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.308 [2024-11-19 10:55:29.713213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.308 [2024-11-19 10:55:29.713220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.308 [2024-11-19 10:55:29.713225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.308 [2024-11-19 10:55:29.713240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.308 qpair failed and we were unable to recover it. 00:28:22.308 [2024-11-19 10:55:29.723180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.308 [2024-11-19 10:55:29.723236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.308 [2024-11-19 10:55:29.723249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.308 [2024-11-19 10:55:29.723255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.308 [2024-11-19 10:55:29.723261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.308 [2024-11-19 10:55:29.723275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.308 qpair failed and we were unable to recover it. 00:28:22.308 [2024-11-19 10:55:29.733131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.308 [2024-11-19 10:55:29.733186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.308 [2024-11-19 10:55:29.733199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.308 [2024-11-19 10:55:29.733205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.308 [2024-11-19 10:55:29.733212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.308 [2024-11-19 10:55:29.733227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.308 qpair failed and we were unable to recover it. 
00:28:22.308 [2024-11-19 10:55:29.743157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.308 [2024-11-19 10:55:29.743208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.308 [2024-11-19 10:55:29.743221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.308 [2024-11-19 10:55:29.743228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.308 [2024-11-19 10:55:29.743234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.308 [2024-11-19 10:55:29.743248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.308 qpair failed and we were unable to recover it. 00:28:22.308 [2024-11-19 10:55:29.753210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.308 [2024-11-19 10:55:29.753265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.308 [2024-11-19 10:55:29.753278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.308 [2024-11-19 10:55:29.753284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.308 [2024-11-19 10:55:29.753290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.308 [2024-11-19 10:55:29.753304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.308 qpair failed and we were unable to recover it. 00:28:22.570 [2024-11-19 10:55:29.763265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.570 [2024-11-19 10:55:29.763320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.570 [2024-11-19 10:55:29.763332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.570 [2024-11-19 10:55:29.763339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.570 [2024-11-19 10:55:29.763345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.570 [2024-11-19 10:55:29.763359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.570 qpair failed and we were unable to recover it. 
00:28:22.570 [2024-11-19 10:55:29.773253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.570 [2024-11-19 10:55:29.773309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.570 [2024-11-19 10:55:29.773322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.570 [2024-11-19 10:55:29.773329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.570 [2024-11-19 10:55:29.773335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.570 [2024-11-19 10:55:29.773349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.570 qpair failed and we were unable to recover it. 00:28:22.570 [2024-11-19 10:55:29.783317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.570 [2024-11-19 10:55:29.783369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.570 [2024-11-19 10:55:29.783385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.570 [2024-11-19 10:55:29.783391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.570 [2024-11-19 10:55:29.783397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.570 [2024-11-19 10:55:29.783411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.570 qpair failed and we were unable to recover it. 00:28:22.570 [2024-11-19 10:55:29.793305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.570 [2024-11-19 10:55:29.793362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.570 [2024-11-19 10:55:29.793376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.570 [2024-11-19 10:55:29.793382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.570 [2024-11-19 10:55:29.793388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.570 [2024-11-19 10:55:29.793402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.570 qpair failed and we were unable to recover it. 
00:28:22.570 [2024-11-19 10:55:29.803395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.570 [2024-11-19 10:55:29.803449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.570 [2024-11-19 10:55:29.803462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.570 [2024-11-19 10:55:29.803468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.570 [2024-11-19 10:55:29.803475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.570 [2024-11-19 10:55:29.803489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.570 qpair failed and we were unable to recover it. 00:28:22.570 [2024-11-19 10:55:29.813357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.570 [2024-11-19 10:55:29.813410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.570 [2024-11-19 10:55:29.813424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.571 [2024-11-19 10:55:29.813430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.571 [2024-11-19 10:55:29.813436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.571 [2024-11-19 10:55:29.813451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.571 qpair failed and we were unable to recover it. 00:28:22.571 [2024-11-19 10:55:29.823444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.571 [2024-11-19 10:55:29.823501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.571 [2024-11-19 10:55:29.823514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.571 [2024-11-19 10:55:29.823521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.571 [2024-11-19 10:55:29.823530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.571 [2024-11-19 10:55:29.823544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.571 qpair failed and we were unable to recover it. 
00:28:22.571 [2024-11-19 10:55:29.833485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.571 [2024-11-19 10:55:29.833541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.571 [2024-11-19 10:55:29.833554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.571 [2024-11-19 10:55:29.833560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.571 [2024-11-19 10:55:29.833567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.571 [2024-11-19 10:55:29.833582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.571 qpair failed and we were unable to recover it. 00:28:22.571 [2024-11-19 10:55:29.843560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.571 [2024-11-19 10:55:29.843620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.571 [2024-11-19 10:55:29.843632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.571 [2024-11-19 10:55:29.843639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.571 [2024-11-19 10:55:29.843645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.571 [2024-11-19 10:55:29.843659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.571 qpair failed and we were unable to recover it. 00:28:22.571 [2024-11-19 10:55:29.853527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.571 [2024-11-19 10:55:29.853583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.571 [2024-11-19 10:55:29.853596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.571 [2024-11-19 10:55:29.853602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.571 [2024-11-19 10:55:29.853608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.571 [2024-11-19 10:55:29.853622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.571 qpair failed and we were unable to recover it. 
00:28:22.571 [2024-11-19 10:55:29.863497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.571 [2024-11-19 10:55:29.863549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.571 [2024-11-19 10:55:29.863562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.571 [2024-11-19 10:55:29.863568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.571 [2024-11-19 10:55:29.863574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.571 [2024-11-19 10:55:29.863588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.571 qpair failed and we were unable to recover it. 00:28:22.571 [2024-11-19 10:55:29.873599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.571 [2024-11-19 10:55:29.873657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.571 [2024-11-19 10:55:29.873670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.571 [2024-11-19 10:55:29.873677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.571 [2024-11-19 10:55:29.873683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.571 [2024-11-19 10:55:29.873697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.571 qpair failed and we were unable to recover it. 00:28:22.571 [2024-11-19 10:55:29.883658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.571 [2024-11-19 10:55:29.883721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.571 [2024-11-19 10:55:29.883734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.571 [2024-11-19 10:55:29.883741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.571 [2024-11-19 10:55:29.883747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.571 [2024-11-19 10:55:29.883762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.571 qpair failed and we were unable to recover it. 
00:28:22.571 [2024-11-19 10:55:29.893664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.571 [2024-11-19 10:55:29.893718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.571 [2024-11-19 10:55:29.893732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.571 [2024-11-19 10:55:29.893739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.571 [2024-11-19 10:55:29.893745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.571 [2024-11-19 10:55:29.893759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.571 qpair failed and we were unable to recover it. 00:28:22.571 [2024-11-19 10:55:29.903672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.571 [2024-11-19 10:55:29.903729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.571 [2024-11-19 10:55:29.903742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.571 [2024-11-19 10:55:29.903749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.571 [2024-11-19 10:55:29.903755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.571 [2024-11-19 10:55:29.903769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.571 qpair failed and we were unable to recover it. 00:28:22.571 [2024-11-19 10:55:29.913720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.571 [2024-11-19 10:55:29.913773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.571 [2024-11-19 10:55:29.913789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.571 [2024-11-19 10:55:29.913796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.571 [2024-11-19 10:55:29.913802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.571 [2024-11-19 10:55:29.913815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.571 qpair failed and we were unable to recover it. 
00:28:22.571 [2024-11-19 10:55:29.923743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.571 [2024-11-19 10:55:29.923795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.571 [2024-11-19 10:55:29.923809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.571 [2024-11-19 10:55:29.923815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.571 [2024-11-19 10:55:29.923821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.571 [2024-11-19 10:55:29.923835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.571 qpair failed and we were unable to recover it. 00:28:22.571 [2024-11-19 10:55:29.933794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.571 [2024-11-19 10:55:29.933887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.571 [2024-11-19 10:55:29.933901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.571 [2024-11-19 10:55:29.933908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.571 [2024-11-19 10:55:29.933914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.571 [2024-11-19 10:55:29.933928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.571 qpair failed and we were unable to recover it. 00:28:22.571 [2024-11-19 10:55:29.943816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.571 [2024-11-19 10:55:29.943871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.572 [2024-11-19 10:55:29.943885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.572 [2024-11-19 10:55:29.943892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.572 [2024-11-19 10:55:29.943898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.572 [2024-11-19 10:55:29.943912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.572 qpair failed and we were unable to recover it. 
00:28:22.572 [2024-11-19 10:55:29.953832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.572 [2024-11-19 10:55:29.953892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.572 [2024-11-19 10:55:29.953906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.572 [2024-11-19 10:55:29.953915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.572 [2024-11-19 10:55:29.953921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.572 [2024-11-19 10:55:29.953936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.572 qpair failed and we were unable to recover it. 00:28:22.572 [2024-11-19 10:55:29.963860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.572 [2024-11-19 10:55:29.963915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.572 [2024-11-19 10:55:29.963929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.572 [2024-11-19 10:55:29.963936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.572 [2024-11-19 10:55:29.963942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.572 [2024-11-19 10:55:29.963960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.572 qpair failed and we were unable to recover it. 00:28:22.572 [2024-11-19 10:55:29.973909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.572 [2024-11-19 10:55:29.973983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.572 [2024-11-19 10:55:29.973996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.572 [2024-11-19 10:55:29.974003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.572 [2024-11-19 10:55:29.974009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.572 [2024-11-19 10:55:29.974024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.572 qpair failed and we were unable to recover it. 
00:28:22.572 [2024-11-19 10:55:29.983908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.572 [2024-11-19 10:55:29.983959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.572 [2024-11-19 10:55:29.983973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.572 [2024-11-19 10:55:29.983979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.572 [2024-11-19 10:55:29.983985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.572 [2024-11-19 10:55:29.984000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.572 qpair failed and we were unable to recover it. 00:28:22.572 [2024-11-19 10:55:29.993955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.572 [2024-11-19 10:55:29.994012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.572 [2024-11-19 10:55:29.994025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.572 [2024-11-19 10:55:29.994031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.572 [2024-11-19 10:55:29.994037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.572 [2024-11-19 10:55:29.994052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.572 qpair failed and we were unable to recover it. 00:28:22.572 [2024-11-19 10:55:30.004006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.572 [2024-11-19 10:55:30.004066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.572 [2024-11-19 10:55:30.004079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.572 [2024-11-19 10:55:30.004086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.572 [2024-11-19 10:55:30.004092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.572 [2024-11-19 10:55:30.004106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.572 qpair failed and we were unable to recover it. 
00:28:22.572 [2024-11-19 10:55:30.014129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.572 [2024-11-19 10:55:30.014281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.572 [2024-11-19 10:55:30.014419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.572 [2024-11-19 10:55:30.014469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.572 [2024-11-19 10:55:30.014489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.572 [2024-11-19 10:55:30.014561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.572 qpair failed and we were unable to recover it. 00:28:22.833 [2024-11-19 10:55:30.024096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.833 [2024-11-19 10:55:30.024160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.833 [2024-11-19 10:55:30.024176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.833 [2024-11-19 10:55:30.024184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.833 [2024-11-19 10:55:30.024190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.833 [2024-11-19 10:55:30.024206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.833 qpair failed and we were unable to recover it. 00:28:22.833 [2024-11-19 10:55:30.034086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.833 [2024-11-19 10:55:30.034147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.833 [2024-11-19 10:55:30.034161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.833 [2024-11-19 10:55:30.034168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.833 [2024-11-19 10:55:30.034174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.833 [2024-11-19 10:55:30.034190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.833 qpair failed and we were unable to recover it. 
00:28:22.833 [2024-11-19 10:55:30.044178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.833 [2024-11-19 10:55:30.044278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.833 [2024-11-19 10:55:30.044296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.833 [2024-11-19 10:55:30.044305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.833 [2024-11-19 10:55:30.044312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.833 [2024-11-19 10:55:30.044329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.833 qpair failed and we were unable to recover it. 00:28:22.833 [2024-11-19 10:55:30.054122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.833 [2024-11-19 10:55:30.054183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.833 [2024-11-19 10:55:30.054198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.833 [2024-11-19 10:55:30.054205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.833 [2024-11-19 10:55:30.054211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.833 [2024-11-19 10:55:30.054227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.833 qpair failed and we were unable to recover it. 00:28:22.833 [2024-11-19 10:55:30.064162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.833 [2024-11-19 10:55:30.064218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.833 [2024-11-19 10:55:30.064233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.833 [2024-11-19 10:55:30.064240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.833 [2024-11-19 10:55:30.064247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.833 [2024-11-19 10:55:30.064262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.833 qpair failed and we were unable to recover it. 
00:28:22.833 [2024-11-19 10:55:30.074185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.833 [2024-11-19 10:55:30.074247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.833 [2024-11-19 10:55:30.074261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.833 [2024-11-19 10:55:30.074268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.833 [2024-11-19 10:55:30.074275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.833 [2024-11-19 10:55:30.074289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.833 qpair failed and we were unable to recover it. 00:28:22.833 [2024-11-19 10:55:30.084150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.833 [2024-11-19 10:55:30.084210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.833 [2024-11-19 10:55:30.084224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.833 [2024-11-19 10:55:30.084235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.833 [2024-11-19 10:55:30.084241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.833 [2024-11-19 10:55:30.084256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.833 qpair failed and we were unable to recover it. 00:28:22.833 [2024-11-19 10:55:30.094189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.833 [2024-11-19 10:55:30.094249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.833 [2024-11-19 10:55:30.094265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.833 [2024-11-19 10:55:30.094273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.833 [2024-11-19 10:55:30.094279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.833 [2024-11-19 10:55:30.094295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.833 qpair failed and we were unable to recover it. 
00:28:22.833 [2024-11-19 10:55:30.104291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.833 [2024-11-19 10:55:30.104350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.833 [2024-11-19 10:55:30.104363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.834 [2024-11-19 10:55:30.104370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.834 [2024-11-19 10:55:30.104376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.834 [2024-11-19 10:55:30.104390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.834 qpair failed and we were unable to recover it. 00:28:22.834 [2024-11-19 10:55:30.114311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.834 [2024-11-19 10:55:30.114373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.834 [2024-11-19 10:55:30.114387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.834 [2024-11-19 10:55:30.114394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.834 [2024-11-19 10:55:30.114400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.834 [2024-11-19 10:55:30.114415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.834 qpair failed and we were unable to recover it. 00:28:22.834 [2024-11-19 10:55:30.124356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.834 [2024-11-19 10:55:30.124413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.834 [2024-11-19 10:55:30.124426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.834 [2024-11-19 10:55:30.124434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.834 [2024-11-19 10:55:30.124440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.834 [2024-11-19 10:55:30.124458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.834 qpair failed and we were unable to recover it. 
00:28:22.834 [2024-11-19 10:55:30.134408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.834 [2024-11-19 10:55:30.134459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.834 [2024-11-19 10:55:30.134472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.834 [2024-11-19 10:55:30.134479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.834 [2024-11-19 10:55:30.134485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.834 [2024-11-19 10:55:30.134500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.834 qpair failed and we were unable to recover it. 00:28:22.834 [2024-11-19 10:55:30.144383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.834 [2024-11-19 10:55:30.144439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.834 [2024-11-19 10:55:30.144452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.834 [2024-11-19 10:55:30.144459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.834 [2024-11-19 10:55:30.144465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.834 [2024-11-19 10:55:30.144480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.834 qpair failed and we were unable to recover it. 00:28:22.834 [2024-11-19 10:55:30.154412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.834 [2024-11-19 10:55:30.154469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.834 [2024-11-19 10:55:30.154483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.834 [2024-11-19 10:55:30.154489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.834 [2024-11-19 10:55:30.154496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.834 [2024-11-19 10:55:30.154511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.834 qpair failed and we were unable to recover it. 
00:28:22.834 [2024-11-19 10:55:30.164444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.834 [2024-11-19 10:55:30.164501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.834 [2024-11-19 10:55:30.164516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.834 [2024-11-19 10:55:30.164523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.834 [2024-11-19 10:55:30.164529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.834 [2024-11-19 10:55:30.164544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.834 qpair failed and we were unable to recover it. 00:28:22.834 [2024-11-19 10:55:30.174434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.834 [2024-11-19 10:55:30.174492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.834 [2024-11-19 10:55:30.174505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.834 [2024-11-19 10:55:30.174511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.834 [2024-11-19 10:55:30.174518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.834 [2024-11-19 10:55:30.174533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.834 qpair failed and we were unable to recover it. 00:28:22.834 [2024-11-19 10:55:30.184493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.834 [2024-11-19 10:55:30.184545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.834 [2024-11-19 10:55:30.184558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.834 [2024-11-19 10:55:30.184564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.834 [2024-11-19 10:55:30.184570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.834 [2024-11-19 10:55:30.184584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.834 qpair failed and we were unable to recover it. 
00:28:22.834 [2024-11-19 10:55:30.194537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.834 [2024-11-19 10:55:30.194592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.834 [2024-11-19 10:55:30.194605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.834 [2024-11-19 10:55:30.194611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.834 [2024-11-19 10:55:30.194618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.834 [2024-11-19 10:55:30.194632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.834 qpair failed and we were unable to recover it. 00:28:22.834 [2024-11-19 10:55:30.204566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.834 [2024-11-19 10:55:30.204624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.834 [2024-11-19 10:55:30.204637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.834 [2024-11-19 10:55:30.204644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.834 [2024-11-19 10:55:30.204650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.834 [2024-11-19 10:55:30.204665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.834 qpair failed and we were unable to recover it. 00:28:22.834 [2024-11-19 10:55:30.214577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.834 [2024-11-19 10:55:30.214633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.834 [2024-11-19 10:55:30.214649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.834 [2024-11-19 10:55:30.214656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.834 [2024-11-19 10:55:30.214662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.834 [2024-11-19 10:55:30.214677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.834 qpair failed and we were unable to recover it. 
00:28:22.834 [2024-11-19 10:55:30.224606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.834 [2024-11-19 10:55:30.224663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.834 [2024-11-19 10:55:30.224676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.834 [2024-11-19 10:55:30.224682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.834 [2024-11-19 10:55:30.224688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.834 [2024-11-19 10:55:30.224703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.834 qpair failed and we were unable to recover it. 00:28:22.834 [2024-11-19 10:55:30.234655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.835 [2024-11-19 10:55:30.234718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.835 [2024-11-19 10:55:30.234732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.835 [2024-11-19 10:55:30.234738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.835 [2024-11-19 10:55:30.234744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.835 [2024-11-19 10:55:30.234759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.835 qpair failed and we were unable to recover it. 00:28:22.835 [2024-11-19 10:55:30.244673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.835 [2024-11-19 10:55:30.244728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.835 [2024-11-19 10:55:30.244741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.835 [2024-11-19 10:55:30.244748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.835 [2024-11-19 10:55:30.244754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:22.835 [2024-11-19 10:55:30.244768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.835 qpair failed and we were unable to recover it. 
00:28:22.835 [2024-11-19 10:55:30.254695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.835 [2024-11-19 10:55:30.254746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.835 [2024-11-19 10:55:30.254759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.835 [2024-11-19 10:55:30.254766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.835 [2024-11-19 10:55:30.254775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:22.835 [2024-11-19 10:55:30.254790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:22.835 qpair failed and we were unable to recover it.
00:28:22.835 [2024-11-19 10:55:30.264736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.835 [2024-11-19 10:55:30.264791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.835 [2024-11-19 10:55:30.264804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.835 [2024-11-19 10:55:30.264811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.835 [2024-11-19 10:55:30.264817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:22.835 [2024-11-19 10:55:30.264831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:22.835 qpair failed and we were unable to recover it.
00:28:22.835 [2024-11-19 10:55:30.274742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.835 [2024-11-19 10:55:30.274838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.835 [2024-11-19 10:55:30.274851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.835 [2024-11-19 10:55:30.274857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.835 [2024-11-19 10:55:30.274863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:22.835 [2024-11-19 10:55:30.274878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:22.835 qpair failed and we were unable to recover it.
00:28:23.095 [2024-11-19 10:55:30.284816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.095 [2024-11-19 10:55:30.284885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.095 [2024-11-19 10:55:30.284899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.095 [2024-11-19 10:55:30.284905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.095 [2024-11-19 10:55:30.284912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.095 [2024-11-19 10:55:30.284926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.095 qpair failed and we were unable to recover it.
00:28:23.095 [2024-11-19 10:55:30.294810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.095 [2024-11-19 10:55:30.294859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.095 [2024-11-19 10:55:30.294871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.095 [2024-11-19 10:55:30.294878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.095 [2024-11-19 10:55:30.294884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.095 [2024-11-19 10:55:30.294898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.095 qpair failed and we were unable to recover it.
00:28:23.095 [2024-11-19 10:55:30.304850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.095 [2024-11-19 10:55:30.304913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.095 [2024-11-19 10:55:30.304926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.096 [2024-11-19 10:55:30.304932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.096 [2024-11-19 10:55:30.304939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.096 [2024-11-19 10:55:30.304959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.096 qpair failed and we were unable to recover it.
00:28:23.096 [2024-11-19 10:55:30.314878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.096 [2024-11-19 10:55:30.314934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.096 [2024-11-19 10:55:30.314951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.096 [2024-11-19 10:55:30.314959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.096 [2024-11-19 10:55:30.314965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.096 [2024-11-19 10:55:30.314980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.096 qpair failed and we were unable to recover it.
00:28:23.096 [2024-11-19 10:55:30.324940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.096 [2024-11-19 10:55:30.325049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.096 [2024-11-19 10:55:30.325062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.096 [2024-11-19 10:55:30.325069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.096 [2024-11-19 10:55:30.325074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.096 [2024-11-19 10:55:30.325089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.096 qpair failed and we were unable to recover it.
00:28:23.096 [2024-11-19 10:55:30.334930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.096 [2024-11-19 10:55:30.334988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.096 [2024-11-19 10:55:30.335002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.096 [2024-11-19 10:55:30.335008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.096 [2024-11-19 10:55:30.335014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.096 [2024-11-19 10:55:30.335028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.096 qpair failed and we were unable to recover it.
00:28:23.096 [2024-11-19 10:55:30.344965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.096 [2024-11-19 10:55:30.345022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.096 [2024-11-19 10:55:30.345039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.096 [2024-11-19 10:55:30.345046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.096 [2024-11-19 10:55:30.345052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.096 [2024-11-19 10:55:30.345066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.096 qpair failed and we were unable to recover it.
00:28:23.096 [2024-11-19 10:55:30.354989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.096 [2024-11-19 10:55:30.355092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.096 [2024-11-19 10:55:30.355105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.096 [2024-11-19 10:55:30.355112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.096 [2024-11-19 10:55:30.355118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.096 [2024-11-19 10:55:30.355133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.096 qpair failed and we were unable to recover it.
00:28:23.096 [2024-11-19 10:55:30.365030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.096 [2024-11-19 10:55:30.365086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.096 [2024-11-19 10:55:30.365099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.096 [2024-11-19 10:55:30.365106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.096 [2024-11-19 10:55:30.365112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.096 [2024-11-19 10:55:30.365127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.096 qpair failed and we were unable to recover it.
00:28:23.096 [2024-11-19 10:55:30.375055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.096 [2024-11-19 10:55:30.375108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.096 [2024-11-19 10:55:30.375122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.096 [2024-11-19 10:55:30.375128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.096 [2024-11-19 10:55:30.375134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.096 [2024-11-19 10:55:30.375149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.096 qpair failed and we were unable to recover it.
00:28:23.096 [2024-11-19 10:55:30.385007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.096 [2024-11-19 10:55:30.385067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.096 [2024-11-19 10:55:30.385080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.096 [2024-11-19 10:55:30.385087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.096 [2024-11-19 10:55:30.385096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.096 [2024-11-19 10:55:30.385112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.096 qpair failed and we were unable to recover it.
00:28:23.096 [2024-11-19 10:55:30.395128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.096 [2024-11-19 10:55:30.395184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.096 [2024-11-19 10:55:30.395197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.096 [2024-11-19 10:55:30.395204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.096 [2024-11-19 10:55:30.395209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.096 [2024-11-19 10:55:30.395225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.096 qpair failed and we were unable to recover it.
00:28:23.096 [2024-11-19 10:55:30.405150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.096 [2024-11-19 10:55:30.405204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.096 [2024-11-19 10:55:30.405218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.096 [2024-11-19 10:55:30.405225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.096 [2024-11-19 10:55:30.405230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.096 [2024-11-19 10:55:30.405245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.096 qpair failed and we were unable to recover it.
00:28:23.096 [2024-11-19 10:55:30.415167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.096 [2024-11-19 10:55:30.415222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.096 [2024-11-19 10:55:30.415235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.096 [2024-11-19 10:55:30.415242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.096 [2024-11-19 10:55:30.415248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.096 [2024-11-19 10:55:30.415262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.096 qpair failed and we were unable to recover it.
00:28:23.096 [2024-11-19 10:55:30.425130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.096 [2024-11-19 10:55:30.425182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.096 [2024-11-19 10:55:30.425195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.096 [2024-11-19 10:55:30.425201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.096 [2024-11-19 10:55:30.425207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.096 [2024-11-19 10:55:30.425221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.096 qpair failed and we were unable to recover it.
00:28:23.096 [2024-11-19 10:55:30.435229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.096 [2024-11-19 10:55:30.435288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.096 [2024-11-19 10:55:30.435303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.097 [2024-11-19 10:55:30.435309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.097 [2024-11-19 10:55:30.435316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.097 [2024-11-19 10:55:30.435330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.097 qpair failed and we were unable to recover it.
00:28:23.097 [2024-11-19 10:55:30.445251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.097 [2024-11-19 10:55:30.445302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.097 [2024-11-19 10:55:30.445316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.097 [2024-11-19 10:55:30.445323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.097 [2024-11-19 10:55:30.445329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.097 [2024-11-19 10:55:30.445343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.097 qpair failed and we were unable to recover it.
00:28:23.097 [2024-11-19 10:55:30.455218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.097 [2024-11-19 10:55:30.455268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.097 [2024-11-19 10:55:30.455281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.097 [2024-11-19 10:55:30.455287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.097 [2024-11-19 10:55:30.455293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.097 [2024-11-19 10:55:30.455307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.097 qpair failed and we were unable to recover it.
00:28:23.097 [2024-11-19 10:55:30.465298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.097 [2024-11-19 10:55:30.465351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.097 [2024-11-19 10:55:30.465364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.097 [2024-11-19 10:55:30.465371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.097 [2024-11-19 10:55:30.465377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.097 [2024-11-19 10:55:30.465391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.097 qpair failed and we were unable to recover it.
00:28:23.097 [2024-11-19 10:55:30.475333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.097 [2024-11-19 10:55:30.475388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.097 [2024-11-19 10:55:30.475404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.097 [2024-11-19 10:55:30.475410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.097 [2024-11-19 10:55:30.475416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.097 [2024-11-19 10:55:30.475430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.097 qpair failed and we were unable to recover it.
00:28:23.097 [2024-11-19 10:55:30.485282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.097 [2024-11-19 10:55:30.485341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.097 [2024-11-19 10:55:30.485355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.097 [2024-11-19 10:55:30.485362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.097 [2024-11-19 10:55:30.485369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.097 [2024-11-19 10:55:30.485383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.097 qpair failed and we were unable to recover it.
00:28:23.097 [2024-11-19 10:55:30.495403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.097 [2024-11-19 10:55:30.495454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.097 [2024-11-19 10:55:30.495467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.097 [2024-11-19 10:55:30.495473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.097 [2024-11-19 10:55:30.495480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.097 [2024-11-19 10:55:30.495493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.097 qpair failed and we were unable to recover it.
00:28:23.097 [2024-11-19 10:55:30.505399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.097 [2024-11-19 10:55:30.505453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.097 [2024-11-19 10:55:30.505466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.097 [2024-11-19 10:55:30.505472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.097 [2024-11-19 10:55:30.505479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.097 [2024-11-19 10:55:30.505493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.097 qpair failed and we were unable to recover it.
00:28:23.097 [2024-11-19 10:55:30.515449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.097 [2024-11-19 10:55:30.515507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.097 [2024-11-19 10:55:30.515520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.097 [2024-11-19 10:55:30.515529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.097 [2024-11-19 10:55:30.515535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.097 [2024-11-19 10:55:30.515551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.097 qpair failed and we were unable to recover it.
00:28:23.097 [2024-11-19 10:55:30.525474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.097 [2024-11-19 10:55:30.525542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.097 [2024-11-19 10:55:30.525555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.097 [2024-11-19 10:55:30.525562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.097 [2024-11-19 10:55:30.525568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.097 [2024-11-19 10:55:30.525583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.097 qpair failed and we were unable to recover it.
00:28:23.097 [2024-11-19 10:55:30.535515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.097 [2024-11-19 10:55:30.535576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.097 [2024-11-19 10:55:30.535589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.097 [2024-11-19 10:55:30.535596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.097 [2024-11-19 10:55:30.535602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.097 [2024-11-19 10:55:30.535616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.097 qpair failed and we were unable to recover it.
00:28:23.358 [2024-11-19 10:55:30.545512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.358 [2024-11-19 10:55:30.545568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.358 [2024-11-19 10:55:30.545581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.358 [2024-11-19 10:55:30.545588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.358 [2024-11-19 10:55:30.545594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.358 [2024-11-19 10:55:30.545608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.358 qpair failed and we were unable to recover it.
00:28:23.358 [2024-11-19 10:55:30.555538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.358 [2024-11-19 10:55:30.555599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.358 [2024-11-19 10:55:30.555613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.358 [2024-11-19 10:55:30.555620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.358 [2024-11-19 10:55:30.555626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.358 [2024-11-19 10:55:30.555645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.358 qpair failed and we were unable to recover it.
00:28:23.358 [2024-11-19 10:55:30.565542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.359 [2024-11-19 10:55:30.565593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.359 [2024-11-19 10:55:30.565606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.359 [2024-11-19 10:55:30.565613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.359 [2024-11-19 10:55:30.565619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.359 [2024-11-19 10:55:30.565633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.359 qpair failed and we were unable to recover it.
00:28:23.359 [2024-11-19 10:55:30.575610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.359 [2024-11-19 10:55:30.575668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.359 [2024-11-19 10:55:30.575682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.359 [2024-11-19 10:55:30.575688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.359 [2024-11-19 10:55:30.575694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.359 [2024-11-19 10:55:30.575708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.359 qpair failed and we were unable to recover it.
00:28:23.359 [2024-11-19 10:55:30.585667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.359 [2024-11-19 10:55:30.585730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.359 [2024-11-19 10:55:30.585743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.359 [2024-11-19 10:55:30.585749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.359 [2024-11-19 10:55:30.585755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.359 [2024-11-19 10:55:30.585770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.359 qpair failed and we were unable to recover it.
00:28:23.359 [2024-11-19 10:55:30.595664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.359 [2024-11-19 10:55:30.595719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.359 [2024-11-19 10:55:30.595733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.359 [2024-11-19 10:55:30.595740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.359 [2024-11-19 10:55:30.595746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.359 [2024-11-19 10:55:30.595760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.359 qpair failed and we were unable to recover it.
00:28:23.359 [2024-11-19 10:55:30.605700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.359 [2024-11-19 10:55:30.605766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.359 [2024-11-19 10:55:30.605780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.359 [2024-11-19 10:55:30.605787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.359 [2024-11-19 10:55:30.605793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.359 [2024-11-19 10:55:30.605807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.359 qpair failed and we were unable to recover it.
00:28:23.359 [2024-11-19 10:55:30.615713] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.359 [2024-11-19 10:55:30.615790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.359 [2024-11-19 10:55:30.615803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.359 [2024-11-19 10:55:30.615810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.359 [2024-11-19 10:55:30.615815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.359 [2024-11-19 10:55:30.615830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.359 qpair failed and we were unable to recover it.
00:28:23.359 [2024-11-19 10:55:30.625736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.359 [2024-11-19 10:55:30.625791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.359 [2024-11-19 10:55:30.625803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.359 [2024-11-19 10:55:30.625810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.359 [2024-11-19 10:55:30.625816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.359 [2024-11-19 10:55:30.625831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.359 qpair failed and we were unable to recover it.
00:28:23.359 [2024-11-19 10:55:30.635709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.359 [2024-11-19 10:55:30.635763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.359 [2024-11-19 10:55:30.635776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.359 [2024-11-19 10:55:30.635783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.359 [2024-11-19 10:55:30.635789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.359 [2024-11-19 10:55:30.635803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.359 qpair failed and we were unable to recover it.
00:28:23.359 [2024-11-19 10:55:30.645725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.359 [2024-11-19 10:55:30.645782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.359 [2024-11-19 10:55:30.645795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.359 [2024-11-19 10:55:30.645805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.359 [2024-11-19 10:55:30.645811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.359 [2024-11-19 10:55:30.645826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.359 qpair failed and we were unable to recover it.
00:28:23.359 [2024-11-19 10:55:30.655820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.359 [2024-11-19 10:55:30.655876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.359 [2024-11-19 10:55:30.655889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.359 [2024-11-19 10:55:30.655896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.359 [2024-11-19 10:55:30.655902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.359 [2024-11-19 10:55:30.655917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.359 qpair failed and we were unable to recover it.
00:28:23.359 [2024-11-19 10:55:30.665856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.359 [2024-11-19 10:55:30.665909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.359 [2024-11-19 10:55:30.665923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.359 [2024-11-19 10:55:30.665930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.359 [2024-11-19 10:55:30.665936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.359 [2024-11-19 10:55:30.665956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.359 qpair failed and we were unable to recover it.
00:28:23.359 [2024-11-19 10:55:30.675886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.359 [2024-11-19 10:55:30.675941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.359 [2024-11-19 10:55:30.675960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.359 [2024-11-19 10:55:30.675967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.359 [2024-11-19 10:55:30.675974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.359 [2024-11-19 10:55:30.675989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.359 qpair failed and we were unable to recover it.
00:28:23.359 [2024-11-19 10:55:30.685929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.359 [2024-11-19 10:55:30.685987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.359 [2024-11-19 10:55:30.686000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.359 [2024-11-19 10:55:30.686006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.359 [2024-11-19 10:55:30.686012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.359 [2024-11-19 10:55:30.686030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.360 qpair failed and we were unable to recover it.
00:28:23.360 [2024-11-19 10:55:30.695895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.360 [2024-11-19 10:55:30.695952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.360 [2024-11-19 10:55:30.695965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.360 [2024-11-19 10:55:30.695972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.360 [2024-11-19 10:55:30.695978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.360 [2024-11-19 10:55:30.695993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.360 qpair failed and we were unable to recover it.
00:28:23.360 [2024-11-19 10:55:30.705956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.360 [2024-11-19 10:55:30.706008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.360 [2024-11-19 10:55:30.706021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.360 [2024-11-19 10:55:30.706027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.360 [2024-11-19 10:55:30.706033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.360 [2024-11-19 10:55:30.706048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.360 qpair failed and we were unable to recover it.
00:28:23.360 [2024-11-19 10:55:30.715980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.360 [2024-11-19 10:55:30.716035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.360 [2024-11-19 10:55:30.716048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.360 [2024-11-19 10:55:30.716055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.360 [2024-11-19 10:55:30.716061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.360 [2024-11-19 10:55:30.716075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.360 qpair failed and we were unable to recover it.
00:28:23.360 [2024-11-19 10:55:30.726011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.360 [2024-11-19 10:55:30.726069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.360 [2024-11-19 10:55:30.726082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.360 [2024-11-19 10:55:30.726089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.360 [2024-11-19 10:55:30.726096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.360 [2024-11-19 10:55:30.726111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.360 qpair failed and we were unable to recover it.
00:28:23.360 [2024-11-19 10:55:30.736042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.360 [2024-11-19 10:55:30.736098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.360 [2024-11-19 10:55:30.736111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.360 [2024-11-19 10:55:30.736118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.360 [2024-11-19 10:55:30.736124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.360 [2024-11-19 10:55:30.736139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.360 qpair failed and we were unable to recover it.
00:28:23.360 [2024-11-19 10:55:30.746076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.360 [2024-11-19 10:55:30.746133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.360 [2024-11-19 10:55:30.746146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.360 [2024-11-19 10:55:30.746153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.360 [2024-11-19 10:55:30.746159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.360 [2024-11-19 10:55:30.746174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.360 qpair failed and we were unable to recover it.
00:28:23.360 [2024-11-19 10:55:30.756112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.360 [2024-11-19 10:55:30.756169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.360 [2024-11-19 10:55:30.756182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.360 [2024-11-19 10:55:30.756189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.360 [2024-11-19 10:55:30.756195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.360 [2024-11-19 10:55:30.756209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.360 qpair failed and we were unable to recover it.
00:28:23.360 [2024-11-19 10:55:30.766174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.360 [2024-11-19 10:55:30.766229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.360 [2024-11-19 10:55:30.766242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.360 [2024-11-19 10:55:30.766248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.360 [2024-11-19 10:55:30.766254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.360 [2024-11-19 10:55:30.766268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.360 qpair failed and we were unable to recover it.
00:28:23.360 [2024-11-19 10:55:30.776172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.360 [2024-11-19 10:55:30.776231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.360 [2024-11-19 10:55:30.776246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.360 [2024-11-19 10:55:30.776253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.360 [2024-11-19 10:55:30.776259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.360 [2024-11-19 10:55:30.776274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.360 qpair failed and we were unable to recover it.
00:28:23.360 [2024-11-19 10:55:30.786242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.360 [2024-11-19 10:55:30.786293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.360 [2024-11-19 10:55:30.786307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.360 [2024-11-19 10:55:30.786313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.360 [2024-11-19 10:55:30.786319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.360 [2024-11-19 10:55:30.786334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.360 qpair failed and we were unable to recover it.
00:28:23.360 [2024-11-19 10:55:30.796233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.360 [2024-11-19 10:55:30.796292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.360 [2024-11-19 10:55:30.796304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.360 [2024-11-19 10:55:30.796311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.360 [2024-11-19 10:55:30.796317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.360 [2024-11-19 10:55:30.796332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.360 qpair failed and we were unable to recover it.
00:28:23.622 [2024-11-19 10:55:30.806318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.622 [2024-11-19 10:55:30.806375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.622 [2024-11-19 10:55:30.806387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.622 [2024-11-19 10:55:30.806394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.622 [2024-11-19 10:55:30.806400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.622 [2024-11-19 10:55:30.806414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.622 qpair failed and we were unable to recover it.
00:28:23.622 [2024-11-19 10:55:30.816280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:23.622 [2024-11-19 10:55:30.816331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:23.622 [2024-11-19 10:55:30.816344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:23.622 [2024-11-19 10:55:30.816351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:23.622 [2024-11-19 10:55:30.816360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:23.622 [2024-11-19 10:55:30.816375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:23.622 qpair failed and we were unable to recover it.
00:28:23.622 [2024-11-19 10:55:30.826356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.622 [2024-11-19 10:55:30.826460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.622 [2024-11-19 10:55:30.826472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.622 [2024-11-19 10:55:30.826479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.622 [2024-11-19 10:55:30.826485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.622 [2024-11-19 10:55:30.826499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.622 qpair failed and we were unable to recover it. 00:28:23.622 [2024-11-19 10:55:30.836330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.622 [2024-11-19 10:55:30.836385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.622 [2024-11-19 10:55:30.836398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.622 [2024-11-19 10:55:30.836405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.622 [2024-11-19 10:55:30.836411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.622 [2024-11-19 10:55:30.836425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.622 qpair failed and we were unable to recover it. 00:28:23.622 [2024-11-19 10:55:30.846395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.622 [2024-11-19 10:55:30.846448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.622 [2024-11-19 10:55:30.846461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.622 [2024-11-19 10:55:30.846468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.622 [2024-11-19 10:55:30.846474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.622 [2024-11-19 10:55:30.846488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.622 qpair failed and we were unable to recover it. 
00:28:23.622 [2024-11-19 10:55:30.856389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.622 [2024-11-19 10:55:30.856446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.622 [2024-11-19 10:55:30.856458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.622 [2024-11-19 10:55:30.856465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.622 [2024-11-19 10:55:30.856471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.622 [2024-11-19 10:55:30.856486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.622 qpair failed and we were unable to recover it. 00:28:23.622 [2024-11-19 10:55:30.866430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.622 [2024-11-19 10:55:30.866480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.622 [2024-11-19 10:55:30.866493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.622 [2024-11-19 10:55:30.866500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.622 [2024-11-19 10:55:30.866506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.622 [2024-11-19 10:55:30.866520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.622 qpair failed and we were unable to recover it. 00:28:23.622 [2024-11-19 10:55:30.876450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.622 [2024-11-19 10:55:30.876504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.622 [2024-11-19 10:55:30.876517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.622 [2024-11-19 10:55:30.876524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.622 [2024-11-19 10:55:30.876530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.622 [2024-11-19 10:55:30.876544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.622 qpair failed and we were unable to recover it. 
00:28:23.622 [2024-11-19 10:55:30.886520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.622 [2024-11-19 10:55:30.886570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.622 [2024-11-19 10:55:30.886583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.622 [2024-11-19 10:55:30.886590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.622 [2024-11-19 10:55:30.886596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.622 [2024-11-19 10:55:30.886610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.622 qpair failed and we were unable to recover it. 00:28:23.622 [2024-11-19 10:55:30.896504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.622 [2024-11-19 10:55:30.896558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.622 [2024-11-19 10:55:30.896570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.622 [2024-11-19 10:55:30.896577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.622 [2024-11-19 10:55:30.896583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.622 [2024-11-19 10:55:30.896597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.622 qpair failed and we were unable to recover it. 00:28:23.622 [2024-11-19 10:55:30.906536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.622 [2024-11-19 10:55:30.906591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.622 [2024-11-19 10:55:30.906606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.622 [2024-11-19 10:55:30.906614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.622 [2024-11-19 10:55:30.906619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.622 [2024-11-19 10:55:30.906634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.622 qpair failed and we were unable to recover it. 
00:28:23.623 [2024-11-19 10:55:30.916559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.623 [2024-11-19 10:55:30.916613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.623 [2024-11-19 10:55:30.916626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.623 [2024-11-19 10:55:30.916632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.623 [2024-11-19 10:55:30.916639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.623 [2024-11-19 10:55:30.916653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.623 qpair failed and we were unable to recover it. 00:28:23.623 [2024-11-19 10:55:30.926525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.623 [2024-11-19 10:55:30.926580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.623 [2024-11-19 10:55:30.926592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.623 [2024-11-19 10:55:30.926599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.623 [2024-11-19 10:55:30.926605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.623 [2024-11-19 10:55:30.926619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.623 qpair failed and we were unable to recover it. 00:28:23.623 [2024-11-19 10:55:30.936618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.623 [2024-11-19 10:55:30.936722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.623 [2024-11-19 10:55:30.936735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.623 [2024-11-19 10:55:30.936741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.623 [2024-11-19 10:55:30.936748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.623 [2024-11-19 10:55:30.936762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.623 qpair failed and we were unable to recover it. 
00:28:23.623 [2024-11-19 10:55:30.946696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.623 [2024-11-19 10:55:30.946751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.623 [2024-11-19 10:55:30.946764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.623 [2024-11-19 10:55:30.946771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.623 [2024-11-19 10:55:30.946780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.623 [2024-11-19 10:55:30.946795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.623 qpair failed and we were unable to recover it. 00:28:23.623 [2024-11-19 10:55:30.956686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.623 [2024-11-19 10:55:30.956748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.623 [2024-11-19 10:55:30.956761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.623 [2024-11-19 10:55:30.956768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.623 [2024-11-19 10:55:30.956774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.623 [2024-11-19 10:55:30.956789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.623 qpair failed and we were unable to recover it. 00:28:23.623 [2024-11-19 10:55:30.966705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.623 [2024-11-19 10:55:30.966757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.623 [2024-11-19 10:55:30.966771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.623 [2024-11-19 10:55:30.966778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.623 [2024-11-19 10:55:30.966783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.623 [2024-11-19 10:55:30.966799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.623 qpair failed and we were unable to recover it. 
00:28:23.623 [2024-11-19 10:55:30.976726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.623 [2024-11-19 10:55:30.976781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.623 [2024-11-19 10:55:30.976794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.623 [2024-11-19 10:55:30.976801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.623 [2024-11-19 10:55:30.976807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.623 [2024-11-19 10:55:30.976821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.623 qpair failed and we were unable to recover it. 00:28:23.623 [2024-11-19 10:55:30.986767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.623 [2024-11-19 10:55:30.986846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.623 [2024-11-19 10:55:30.986859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.623 [2024-11-19 10:55:30.986866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.623 [2024-11-19 10:55:30.986872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.623 [2024-11-19 10:55:30.986887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.623 qpair failed and we were unable to recover it. 00:28:23.623 [2024-11-19 10:55:30.996781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.623 [2024-11-19 10:55:30.996835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.623 [2024-11-19 10:55:30.996849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.623 [2024-11-19 10:55:30.996855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.623 [2024-11-19 10:55:30.996861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.623 [2024-11-19 10:55:30.996876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.623 qpair failed and we were unable to recover it. 
00:28:23.623 [2024-11-19 10:55:31.006809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.623 [2024-11-19 10:55:31.006859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.623 [2024-11-19 10:55:31.006873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.623 [2024-11-19 10:55:31.006880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.623 [2024-11-19 10:55:31.006886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.623 [2024-11-19 10:55:31.006901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.623 qpair failed and we were unable to recover it. 00:28:23.623 [2024-11-19 10:55:31.016889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.623 [2024-11-19 10:55:31.016940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.623 [2024-11-19 10:55:31.016956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.623 [2024-11-19 10:55:31.016962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.623 [2024-11-19 10:55:31.016968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.623 [2024-11-19 10:55:31.016983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.623 qpair failed and we were unable to recover it. 00:28:23.623 [2024-11-19 10:55:31.026865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.623 [2024-11-19 10:55:31.026923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.623 [2024-11-19 10:55:31.026936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.623 [2024-11-19 10:55:31.026943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.623 [2024-11-19 10:55:31.026953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.623 [2024-11-19 10:55:31.026968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.623 qpair failed and we were unable to recover it. 
00:28:23.623 [2024-11-19 10:55:31.036919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.623 [2024-11-19 10:55:31.036979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.623 [2024-11-19 10:55:31.036994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.623 [2024-11-19 10:55:31.037001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.623 [2024-11-19 10:55:31.037007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.623 [2024-11-19 10:55:31.037022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.623 qpair failed and we were unable to recover it. 00:28:23.624 [2024-11-19 10:55:31.046927] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.624 [2024-11-19 10:55:31.046982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.624 [2024-11-19 10:55:31.046995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.624 [2024-11-19 10:55:31.047002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.624 [2024-11-19 10:55:31.047008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.624 [2024-11-19 10:55:31.047022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.624 qpair failed and we were unable to recover it. 00:28:23.624 [2024-11-19 10:55:31.056962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.624 [2024-11-19 10:55:31.057015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.624 [2024-11-19 10:55:31.057028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.624 [2024-11-19 10:55:31.057035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.624 [2024-11-19 10:55:31.057041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.624 [2024-11-19 10:55:31.057055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.624 qpair failed and we were unable to recover it. 
00:28:23.624 [2024-11-19 10:55:31.066987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.624 [2024-11-19 10:55:31.067058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.624 [2024-11-19 10:55:31.067071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.624 [2024-11-19 10:55:31.067077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.624 [2024-11-19 10:55:31.067083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.624 [2024-11-19 10:55:31.067098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.624 qpair failed and we were unable to recover it. 00:28:23.884 [2024-11-19 10:55:31.076958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.884 [2024-11-19 10:55:31.077036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.884 [2024-11-19 10:55:31.077050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.884 [2024-11-19 10:55:31.077060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.884 [2024-11-19 10:55:31.077065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.884 [2024-11-19 10:55:31.077080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.884 qpair failed and we were unable to recover it. 00:28:23.884 [2024-11-19 10:55:31.087085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.884 [2024-11-19 10:55:31.087140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.884 [2024-11-19 10:55:31.087154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.884 [2024-11-19 10:55:31.087160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.884 [2024-11-19 10:55:31.087166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.884 [2024-11-19 10:55:31.087181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.884 qpair failed and we were unable to recover it. 
00:28:23.884 [2024-11-19 10:55:31.097092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.884 [2024-11-19 10:55:31.097156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.884 [2024-11-19 10:55:31.097170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.884 [2024-11-19 10:55:31.097177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.884 [2024-11-19 10:55:31.097183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.884 [2024-11-19 10:55:31.097198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.884 qpair failed and we were unable to recover it. 00:28:23.884 [2024-11-19 10:55:31.107108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.884 [2024-11-19 10:55:31.107159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.884 [2024-11-19 10:55:31.107172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.884 [2024-11-19 10:55:31.107179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.884 [2024-11-19 10:55:31.107185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.884 [2024-11-19 10:55:31.107199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.884 qpair failed and we were unable to recover it. 00:28:23.884 [2024-11-19 10:55:31.117162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.884 [2024-11-19 10:55:31.117217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.884 [2024-11-19 10:55:31.117231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.884 [2024-11-19 10:55:31.117237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.884 [2024-11-19 10:55:31.117243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.884 [2024-11-19 10:55:31.117261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.884 qpair failed and we were unable to recover it. 
00:28:23.884 [2024-11-19 10:55:31.127172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.884 [2024-11-19 10:55:31.127228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.884 [2024-11-19 10:55:31.127241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.884 [2024-11-19 10:55:31.127248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.884 [2024-11-19 10:55:31.127254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.884 [2024-11-19 10:55:31.127268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.884 qpair failed and we were unable to recover it. 00:28:23.884 [2024-11-19 10:55:31.137206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.884 [2024-11-19 10:55:31.137258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.884 [2024-11-19 10:55:31.137272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.884 [2024-11-19 10:55:31.137278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.884 [2024-11-19 10:55:31.137284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.885 [2024-11-19 10:55:31.137299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.885 qpair failed and we were unable to recover it. 00:28:23.885 [2024-11-19 10:55:31.147222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.885 [2024-11-19 10:55:31.147274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.885 [2024-11-19 10:55:31.147286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.885 [2024-11-19 10:55:31.147293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.885 [2024-11-19 10:55:31.147299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.885 [2024-11-19 10:55:31.147313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.885 qpair failed and we were unable to recover it. 
00:28:23.885 [2024-11-19 10:55:31.157240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.885 [2024-11-19 10:55:31.157296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.885 [2024-11-19 10:55:31.157310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.885 [2024-11-19 10:55:31.157317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.885 [2024-11-19 10:55:31.157323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.885 [2024-11-19 10:55:31.157337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.885 qpair failed and we were unable to recover it. 00:28:23.885 [2024-11-19 10:55:31.167277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.885 [2024-11-19 10:55:31.167332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.885 [2024-11-19 10:55:31.167346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.885 [2024-11-19 10:55:31.167352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.885 [2024-11-19 10:55:31.167358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.885 [2024-11-19 10:55:31.167373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.885 qpair failed and we were unable to recover it. 00:28:23.885 [2024-11-19 10:55:31.177338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.885 [2024-11-19 10:55:31.177396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.885 [2024-11-19 10:55:31.177408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.885 [2024-11-19 10:55:31.177415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.885 [2024-11-19 10:55:31.177421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.885 [2024-11-19 10:55:31.177435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.885 qpair failed and we were unable to recover it. 
00:28:23.885 [2024-11-19 10:55:31.187305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.885 [2024-11-19 10:55:31.187391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.885 [2024-11-19 10:55:31.187404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.885 [2024-11-19 10:55:31.187411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.885 [2024-11-19 10:55:31.187416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.885 [2024-11-19 10:55:31.187430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.885 qpair failed and we were unable to recover it. 00:28:23.885 [2024-11-19 10:55:31.197304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.885 [2024-11-19 10:55:31.197358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.885 [2024-11-19 10:55:31.197371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.885 [2024-11-19 10:55:31.197377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.885 [2024-11-19 10:55:31.197383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.885 [2024-11-19 10:55:31.197398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.885 qpair failed and we were unable to recover it. 00:28:23.885 [2024-11-19 10:55:31.207324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.885 [2024-11-19 10:55:31.207379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.885 [2024-11-19 10:55:31.207392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.885 [2024-11-19 10:55:31.207403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.885 [2024-11-19 10:55:31.207409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.885 [2024-11-19 10:55:31.207424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.885 qpair failed and we were unable to recover it. 
00:28:23.885 [2024-11-19 10:55:31.217357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.885 [2024-11-19 10:55:31.217411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.885 [2024-11-19 10:55:31.217424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.885 [2024-11-19 10:55:31.217431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.885 [2024-11-19 10:55:31.217437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.885 [2024-11-19 10:55:31.217451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.885 qpair failed and we were unable to recover it. 00:28:23.885 [2024-11-19 10:55:31.227391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.885 [2024-11-19 10:55:31.227450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.885 [2024-11-19 10:55:31.227463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.885 [2024-11-19 10:55:31.227469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.885 [2024-11-19 10:55:31.227475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.885 [2024-11-19 10:55:31.227490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.885 qpair failed and we were unable to recover it. 00:28:23.885 [2024-11-19 10:55:31.237409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.885 [2024-11-19 10:55:31.237466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.885 [2024-11-19 10:55:31.237479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.885 [2024-11-19 10:55:31.237485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.885 [2024-11-19 10:55:31.237492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.885 [2024-11-19 10:55:31.237506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.885 qpair failed and we were unable to recover it. 
00:28:23.885 [2024-11-19 10:55:31.247525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.885 [2024-11-19 10:55:31.247581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.885 [2024-11-19 10:55:31.247595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.885 [2024-11-19 10:55:31.247602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.885 [2024-11-19 10:55:31.247608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.885 [2024-11-19 10:55:31.247625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.885 qpair failed and we were unable to recover it. 00:28:23.885 [2024-11-19 10:55:31.257567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.885 [2024-11-19 10:55:31.257621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.885 [2024-11-19 10:55:31.257634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.885 [2024-11-19 10:55:31.257641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.885 [2024-11-19 10:55:31.257647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.885 [2024-11-19 10:55:31.257661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.885 qpair failed and we were unable to recover it. 00:28:23.885 [2024-11-19 10:55:31.267519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.885 [2024-11-19 10:55:31.267575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.885 [2024-11-19 10:55:31.267589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.885 [2024-11-19 10:55:31.267596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.885 [2024-11-19 10:55:31.267602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.886 [2024-11-19 10:55:31.267617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.886 qpair failed and we were unable to recover it. 
00:28:23.886 [2024-11-19 10:55:31.277612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.886 [2024-11-19 10:55:31.277664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.886 [2024-11-19 10:55:31.277677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.886 [2024-11-19 10:55:31.277683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.886 [2024-11-19 10:55:31.277689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.886 [2024-11-19 10:55:31.277703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.886 qpair failed and we were unable to recover it. 00:28:23.886 [2024-11-19 10:55:31.287654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.886 [2024-11-19 10:55:31.287709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.886 [2024-11-19 10:55:31.287722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.886 [2024-11-19 10:55:31.287728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.886 [2024-11-19 10:55:31.287734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.886 [2024-11-19 10:55:31.287749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.886 qpair failed and we were unable to recover it. 00:28:23.886 [2024-11-19 10:55:31.297655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.886 [2024-11-19 10:55:31.297716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.886 [2024-11-19 10:55:31.297729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.886 [2024-11-19 10:55:31.297735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.886 [2024-11-19 10:55:31.297741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.886 [2024-11-19 10:55:31.297755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.886 qpair failed and we were unable to recover it. 
00:28:23.886 [2024-11-19 10:55:31.307624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.886 [2024-11-19 10:55:31.307677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.886 [2024-11-19 10:55:31.307689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.886 [2024-11-19 10:55:31.307695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.886 [2024-11-19 10:55:31.307701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.886 [2024-11-19 10:55:31.307716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.886 qpair failed and we were unable to recover it. 00:28:23.886 [2024-11-19 10:55:31.317699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.886 [2024-11-19 10:55:31.317774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.886 [2024-11-19 10:55:31.317787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.886 [2024-11-19 10:55:31.317793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.886 [2024-11-19 10:55:31.317799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.886 [2024-11-19 10:55:31.317814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.886 qpair failed and we were unable to recover it. 00:28:23.886 [2024-11-19 10:55:31.327724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:23.886 [2024-11-19 10:55:31.327795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:23.886 [2024-11-19 10:55:31.327808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:23.886 [2024-11-19 10:55:31.327814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:23.886 [2024-11-19 10:55:31.327820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:23.886 [2024-11-19 10:55:31.327834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:23.886 qpair failed and we were unable to recover it. 
00:28:24.147 [2024-11-19 10:55:31.337772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.147 [2024-11-19 10:55:31.337826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.147 [2024-11-19 10:55:31.337841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.147 [2024-11-19 10:55:31.337848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.147 [2024-11-19 10:55:31.337854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.147 [2024-11-19 10:55:31.337868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.147 qpair failed and we were unable to recover it. 00:28:24.147 [2024-11-19 10:55:31.347743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.147 [2024-11-19 10:55:31.347796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.147 [2024-11-19 10:55:31.347810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.147 [2024-11-19 10:55:31.347816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.147 [2024-11-19 10:55:31.347822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.147 [2024-11-19 10:55:31.347837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.147 qpair failed and we were unable to recover it. 00:28:24.147 [2024-11-19 10:55:31.357848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.147 [2024-11-19 10:55:31.357907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.147 [2024-11-19 10:55:31.357919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.147 [2024-11-19 10:55:31.357926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.147 [2024-11-19 10:55:31.357932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.147 [2024-11-19 10:55:31.357950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.147 qpair failed and we were unable to recover it. 
00:28:24.147 [2024-11-19 10:55:31.367865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.147 [2024-11-19 10:55:31.367950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.147 [2024-11-19 10:55:31.367963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.147 [2024-11-19 10:55:31.367970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.147 [2024-11-19 10:55:31.367976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.147 [2024-11-19 10:55:31.367989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.147 qpair failed and we were unable to recover it. 00:28:24.147 [2024-11-19 10:55:31.377896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.147 [2024-11-19 10:55:31.377981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.147 [2024-11-19 10:55:31.377994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.147 [2024-11-19 10:55:31.378000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.147 [2024-11-19 10:55:31.378013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.147 [2024-11-19 10:55:31.378028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.147 qpair failed and we were unable to recover it. 00:28:24.147 [2024-11-19 10:55:31.387880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.147 [2024-11-19 10:55:31.387929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.147 [2024-11-19 10:55:31.387942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.147 [2024-11-19 10:55:31.387953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.147 [2024-11-19 10:55:31.387959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.147 [2024-11-19 10:55:31.387974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.147 qpair failed and we were unable to recover it. 
00:28:24.147 [2024-11-19 10:55:31.397977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.147 [2024-11-19 10:55:31.398031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.147 [2024-11-19 10:55:31.398044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.147 [2024-11-19 10:55:31.398051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.147 [2024-11-19 10:55:31.398057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.147 [2024-11-19 10:55:31.398071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.147 qpair failed and we were unable to recover it. 00:28:24.147 [2024-11-19 10:55:31.407983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.147 [2024-11-19 10:55:31.408034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.147 [2024-11-19 10:55:31.408047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.147 [2024-11-19 10:55:31.408054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.147 [2024-11-19 10:55:31.408060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.147 [2024-11-19 10:55:31.408076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.148 qpair failed and we were unable to recover it. 00:28:24.148 [2024-11-19 10:55:31.417960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.148 [2024-11-19 10:55:31.418016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.148 [2024-11-19 10:55:31.418029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.148 [2024-11-19 10:55:31.418036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.148 [2024-11-19 10:55:31.418042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.148 [2024-11-19 10:55:31.418057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.148 qpair failed and we were unable to recover it. 
00:28:24.148 [2024-11-19 10:55:31.428053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.148 [2024-11-19 10:55:31.428127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.148 [2024-11-19 10:55:31.428140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.148 [2024-11-19 10:55:31.428148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.148 [2024-11-19 10:55:31.428154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.148 [2024-11-19 10:55:31.428167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.148 qpair failed and we were unable to recover it. 00:28:24.148 [2024-11-19 10:55:31.438022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.148 [2024-11-19 10:55:31.438084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.148 [2024-11-19 10:55:31.438098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.148 [2024-11-19 10:55:31.438105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.148 [2024-11-19 10:55:31.438111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.148 [2024-11-19 10:55:31.438126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.148 qpair failed and we were unable to recover it. 00:28:24.148 [2024-11-19 10:55:31.448098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.148 [2024-11-19 10:55:31.448154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.148 [2024-11-19 10:55:31.448168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.148 [2024-11-19 10:55:31.448174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.148 [2024-11-19 10:55:31.448180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.148 [2024-11-19 10:55:31.448194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.148 qpair failed and we were unable to recover it. 
00:28:24.148 [2024-11-19 10:55:31.458176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.148 [2024-11-19 10:55:31.458232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.148 [2024-11-19 10:55:31.458245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.148 [2024-11-19 10:55:31.458252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.148 [2024-11-19 10:55:31.458258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.148 [2024-11-19 10:55:31.458272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.148 qpair failed and we were unable to recover it. 00:28:24.148 [2024-11-19 10:55:31.468152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.148 [2024-11-19 10:55:31.468244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.148 [2024-11-19 10:55:31.468260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.148 [2024-11-19 10:55:31.468266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.148 [2024-11-19 10:55:31.468272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.148 [2024-11-19 10:55:31.468287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.148 qpair failed and we were unable to recover it. 00:28:24.148 [2024-11-19 10:55:31.478218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.148 [2024-11-19 10:55:31.478273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.148 [2024-11-19 10:55:31.478286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.148 [2024-11-19 10:55:31.478293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.148 [2024-11-19 10:55:31.478298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.148 [2024-11-19 10:55:31.478313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.148 qpair failed and we were unable to recover it. 
00:28:24.148 [2024-11-19 10:55:31.488233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.148 [2024-11-19 10:55:31.488286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.148 [2024-11-19 10:55:31.488301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.148 [2024-11-19 10:55:31.488307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.148 [2024-11-19 10:55:31.488314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.148 [2024-11-19 10:55:31.488328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.148 qpair failed and we were unable to recover it. 00:28:24.148 [2024-11-19 10:55:31.498172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.148 [2024-11-19 10:55:31.498237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.148 [2024-11-19 10:55:31.498250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.148 [2024-11-19 10:55:31.498257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.148 [2024-11-19 10:55:31.498263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.148 [2024-11-19 10:55:31.498277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.148 qpair failed and we were unable to recover it. 00:28:24.148 [2024-11-19 10:55:31.508210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.148 [2024-11-19 10:55:31.508297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.148 [2024-11-19 10:55:31.508311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.148 [2024-11-19 10:55:31.508317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.148 [2024-11-19 10:55:31.508326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.148 [2024-11-19 10:55:31.508340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.148 qpair failed and we were unable to recover it. 
00:28:24.148 [2024-11-19 10:55:31.518310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.148 [2024-11-19 10:55:31.518369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.148 [2024-11-19 10:55:31.518383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.148 [2024-11-19 10:55:31.518389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.148 [2024-11-19 10:55:31.518395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.148 [2024-11-19 10:55:31.518410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.148 qpair failed and we were unable to recover it. 00:28:24.148 [2024-11-19 10:55:31.528340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.148 [2024-11-19 10:55:31.528395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.148 [2024-11-19 10:55:31.528408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.148 [2024-11-19 10:55:31.528414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.148 [2024-11-19 10:55:31.528421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.148 [2024-11-19 10:55:31.528436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.148 qpair failed and we were unable to recover it. 00:28:24.148 [2024-11-19 10:55:31.538331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.148 [2024-11-19 10:55:31.538381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.148 [2024-11-19 10:55:31.538393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.148 [2024-11-19 10:55:31.538400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.148 [2024-11-19 10:55:31.538406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.149 [2024-11-19 10:55:31.538421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.149 qpair failed and we were unable to recover it. 
00:28:24.149 [2024-11-19 10:55:31.548385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.149 [2024-11-19 10:55:31.548435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.149 [2024-11-19 10:55:31.548447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.149 [2024-11-19 10:55:31.548454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.149 [2024-11-19 10:55:31.548460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.149 [2024-11-19 10:55:31.548475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.149 qpair failed and we were unable to recover it. 00:28:24.149 [2024-11-19 10:55:31.558420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.149 [2024-11-19 10:55:31.558471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.149 [2024-11-19 10:55:31.558484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.149 [2024-11-19 10:55:31.558491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.149 [2024-11-19 10:55:31.558497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.149 [2024-11-19 10:55:31.558512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.149 qpair failed and we were unable to recover it. 00:28:24.149 [2024-11-19 10:55:31.568473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.149 [2024-11-19 10:55:31.568522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.149 [2024-11-19 10:55:31.568535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.149 [2024-11-19 10:55:31.568542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.149 [2024-11-19 10:55:31.568548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.149 [2024-11-19 10:55:31.568563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.149 qpair failed and we were unable to recover it. 
00:28:24.149 [2024-11-19 10:55:31.578525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.149 [2024-11-19 10:55:31.578582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.149 [2024-11-19 10:55:31.578596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.149 [2024-11-19 10:55:31.578604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.149 [2024-11-19 10:55:31.578610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.149 [2024-11-19 10:55:31.578625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.149 qpair failed and we were unable to recover it. 00:28:24.149 [2024-11-19 10:55:31.588495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.149 [2024-11-19 10:55:31.588546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.149 [2024-11-19 10:55:31.588558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.149 [2024-11-19 10:55:31.588565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.149 [2024-11-19 10:55:31.588572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.149 [2024-11-19 10:55:31.588588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.149 qpair failed and we were unable to recover it. 00:28:24.409 [2024-11-19 10:55:31.598556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.409 [2024-11-19 10:55:31.598628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.409 [2024-11-19 10:55:31.598644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.410 [2024-11-19 10:55:31.598651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.410 [2024-11-19 10:55:31.598657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.410 [2024-11-19 10:55:31.598672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.410 qpair failed and we were unable to recover it. 
00:28:24.410 [2024-11-19 10:55:31.608544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.410 [2024-11-19 10:55:31.608596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.410 [2024-11-19 10:55:31.608608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.410 [2024-11-19 10:55:31.608615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.410 [2024-11-19 10:55:31.608621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.410 [2024-11-19 10:55:31.608637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.410 qpair failed and we were unable to recover it. 00:28:24.410 [2024-11-19 10:55:31.618590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.410 [2024-11-19 10:55:31.618647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.410 [2024-11-19 10:55:31.618660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.410 [2024-11-19 10:55:31.618667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.410 [2024-11-19 10:55:31.618673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.410 [2024-11-19 10:55:31.618687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.410 qpair failed and we were unable to recover it. 00:28:24.410 [2024-11-19 10:55:31.628611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.410 [2024-11-19 10:55:31.628686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.410 [2024-11-19 10:55:31.628699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.410 [2024-11-19 10:55:31.628707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.410 [2024-11-19 10:55:31.628713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.410 [2024-11-19 10:55:31.628728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.410 qpair failed and we were unable to recover it. 
00:28:24.410 [2024-11-19 10:55:31.638636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.410 [2024-11-19 10:55:31.638689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.410 [2024-11-19 10:55:31.638702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.410 [2024-11-19 10:55:31.638712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.410 [2024-11-19 10:55:31.638718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.410 [2024-11-19 10:55:31.638733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.410 qpair failed and we were unable to recover it. 00:28:24.410 [2024-11-19 10:55:31.648675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.410 [2024-11-19 10:55:31.648730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.410 [2024-11-19 10:55:31.648743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.410 [2024-11-19 10:55:31.648750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.410 [2024-11-19 10:55:31.648757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.410 [2024-11-19 10:55:31.648772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.410 qpair failed and we were unable to recover it. 00:28:24.410 [2024-11-19 10:55:31.658644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.410 [2024-11-19 10:55:31.658734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.410 [2024-11-19 10:55:31.658747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.410 [2024-11-19 10:55:31.658754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.410 [2024-11-19 10:55:31.658761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.410 [2024-11-19 10:55:31.658775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.410 qpair failed and we were unable to recover it. 
00:28:24.410 [2024-11-19 10:55:31.668723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.410 [2024-11-19 10:55:31.668780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.410 [2024-11-19 10:55:31.668793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.410 [2024-11-19 10:55:31.668800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.410 [2024-11-19 10:55:31.668807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.410 [2024-11-19 10:55:31.668822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.410 qpair failed and we were unable to recover it. 00:28:24.410 [2024-11-19 10:55:31.678765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.410 [2024-11-19 10:55:31.678839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.410 [2024-11-19 10:55:31.678853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.410 [2024-11-19 10:55:31.678861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.410 [2024-11-19 10:55:31.678866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.410 [2024-11-19 10:55:31.678887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.410 qpair failed and we were unable to recover it. 00:28:24.410 [2024-11-19 10:55:31.688778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.410 [2024-11-19 10:55:31.688834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.410 [2024-11-19 10:55:31.688848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.410 [2024-11-19 10:55:31.688855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.410 [2024-11-19 10:55:31.688862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.410 [2024-11-19 10:55:31.688877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.410 qpair failed and we were unable to recover it. 
00:28:24.410 [2024-11-19 10:55:31.698801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.410 [2024-11-19 10:55:31.698855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.410 [2024-11-19 10:55:31.698870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.410 [2024-11-19 10:55:31.698877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.410 [2024-11-19 10:55:31.698884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.410 [2024-11-19 10:55:31.698899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.410 qpair failed and we were unable to recover it. 00:28:24.410 [2024-11-19 10:55:31.708834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.410 [2024-11-19 10:55:31.708890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.410 [2024-11-19 10:55:31.708903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.410 [2024-11-19 10:55:31.708910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.410 [2024-11-19 10:55:31.708917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.410 [2024-11-19 10:55:31.708932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.410 qpair failed and we were unable to recover it. 00:28:24.410 [2024-11-19 10:55:31.718876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.410 [2024-11-19 10:55:31.718933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.410 [2024-11-19 10:55:31.718951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.410 [2024-11-19 10:55:31.718959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.410 [2024-11-19 10:55:31.718965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.410 [2024-11-19 10:55:31.718981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.410 qpair failed and we were unable to recover it. 
00:28:24.410 [2024-11-19 10:55:31.728903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.410 [2024-11-19 10:55:31.728975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.411 [2024-11-19 10:55:31.728989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.411 [2024-11-19 10:55:31.728997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.411 [2024-11-19 10:55:31.729003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.411 [2024-11-19 10:55:31.729018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.411 qpair failed and we were unable to recover it. 00:28:24.411 [2024-11-19 10:55:31.738961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.411 [2024-11-19 10:55:31.739020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.411 [2024-11-19 10:55:31.739034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.411 [2024-11-19 10:55:31.739041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.411 [2024-11-19 10:55:31.739048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.411 [2024-11-19 10:55:31.739065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.411 qpair failed and we were unable to recover it. 00:28:24.411 [2024-11-19 10:55:31.748978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.411 [2024-11-19 10:55:31.749073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.411 [2024-11-19 10:55:31.749086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.411 [2024-11-19 10:55:31.749093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.411 [2024-11-19 10:55:31.749100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.411 [2024-11-19 10:55:31.749116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.411 qpair failed and we were unable to recover it. 
00:28:24.411 [2024-11-19 10:55:31.758986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.411 [2024-11-19 10:55:31.759042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.411 [2024-11-19 10:55:31.759056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.411 [2024-11-19 10:55:31.759063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.411 [2024-11-19 10:55:31.759070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.411 [2024-11-19 10:55:31.759085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.411 qpair failed and we were unable to recover it. 00:28:24.411 [2024-11-19 10:55:31.769065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.411 [2024-11-19 10:55:31.769120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.411 [2024-11-19 10:55:31.769133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.411 [2024-11-19 10:55:31.769144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.411 [2024-11-19 10:55:31.769152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.411 [2024-11-19 10:55:31.769167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.411 qpair failed and we were unable to recover it. 00:28:24.411 [2024-11-19 10:55:31.779035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.411 [2024-11-19 10:55:31.779090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.411 [2024-11-19 10:55:31.779103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.411 [2024-11-19 10:55:31.779110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.411 [2024-11-19 10:55:31.779117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.411 [2024-11-19 10:55:31.779132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.411 qpair failed and we were unable to recover it. 
00:28:24.411 [2024-11-19 10:55:31.789083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.411 [2024-11-19 10:55:31.789138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.411 [2024-11-19 10:55:31.789152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.411 [2024-11-19 10:55:31.789159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.411 [2024-11-19 10:55:31.789166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.411 [2024-11-19 10:55:31.789181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.411 qpair failed and we were unable to recover it. 00:28:24.411 [2024-11-19 10:55:31.799112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.411 [2024-11-19 10:55:31.799169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.411 [2024-11-19 10:55:31.799182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.411 [2024-11-19 10:55:31.799190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.411 [2024-11-19 10:55:31.799196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.411 [2024-11-19 10:55:31.799211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.411 qpair failed and we were unable to recover it. 00:28:24.411 [2024-11-19 10:55:31.809131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.411 [2024-11-19 10:55:31.809189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.411 [2024-11-19 10:55:31.809205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.411 [2024-11-19 10:55:31.809212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.411 [2024-11-19 10:55:31.809219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.411 [2024-11-19 10:55:31.809237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.411 qpair failed and we were unable to recover it. 
00:28:24.411 [2024-11-19 10:55:31.819151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.411 [2024-11-19 10:55:31.819203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.411 [2024-11-19 10:55:31.819216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.411 [2024-11-19 10:55:31.819223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.411 [2024-11-19 10:55:31.819230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.411 [2024-11-19 10:55:31.819245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.411 qpair failed and we were unable to recover it. 00:28:24.411 [2024-11-19 10:55:31.829185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.411 [2024-11-19 10:55:31.829260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.411 [2024-11-19 10:55:31.829274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.411 [2024-11-19 10:55:31.829281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.411 [2024-11-19 10:55:31.829287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.411 [2024-11-19 10:55:31.829302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.411 qpair failed and we were unable to recover it. 00:28:24.411 [2024-11-19 10:55:31.839146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.411 [2024-11-19 10:55:31.839200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.411 [2024-11-19 10:55:31.839213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.411 [2024-11-19 10:55:31.839221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.411 [2024-11-19 10:55:31.839228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.411 [2024-11-19 10:55:31.839243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.411 qpair failed and we were unable to recover it. 
00:28:24.411 [2024-11-19 10:55:31.849286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.411 [2024-11-19 10:55:31.849345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.411 [2024-11-19 10:55:31.849358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.411 [2024-11-19 10:55:31.849366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.411 [2024-11-19 10:55:31.849373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.411 [2024-11-19 10:55:31.849388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.411 qpair failed and we were unable to recover it. 00:28:24.673 [2024-11-19 10:55:31.859300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.673 [2024-11-19 10:55:31.859354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.673 [2024-11-19 10:55:31.859368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.673 [2024-11-19 10:55:31.859375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.673 [2024-11-19 10:55:31.859382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.673 [2024-11-19 10:55:31.859397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.673 qpair failed and we were unable to recover it. 00:28:24.673 [2024-11-19 10:55:31.869299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.673 [2024-11-19 10:55:31.869348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.673 [2024-11-19 10:55:31.869362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.673 [2024-11-19 10:55:31.869369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.673 [2024-11-19 10:55:31.869376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.673 [2024-11-19 10:55:31.869391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.673 qpair failed and we were unable to recover it. 
00:28:24.673 [2024-11-19 10:55:31.879322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.673 [2024-11-19 10:55:31.879381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.673 [2024-11-19 10:55:31.879395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.673 [2024-11-19 10:55:31.879402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.673 [2024-11-19 10:55:31.879409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.673 [2024-11-19 10:55:31.879424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.673 qpair failed and we were unable to recover it. 00:28:24.673 [2024-11-19 10:55:31.889345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.673 [2024-11-19 10:55:31.889398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.673 [2024-11-19 10:55:31.889412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.673 [2024-11-19 10:55:31.889419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.673 [2024-11-19 10:55:31.889426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.673 [2024-11-19 10:55:31.889441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.673 qpair failed and we were unable to recover it. 00:28:24.673 [2024-11-19 10:55:31.899394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.673 [2024-11-19 10:55:31.899450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.673 [2024-11-19 10:55:31.899467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.673 [2024-11-19 10:55:31.899475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.673 [2024-11-19 10:55:31.899481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.673 [2024-11-19 10:55:31.899496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.673 qpair failed and we were unable to recover it. 
00:28:24.673 [2024-11-19 10:55:31.909334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.673 [2024-11-19 10:55:31.909393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.673 [2024-11-19 10:55:31.909407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.673 [2024-11-19 10:55:31.909414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.673 [2024-11-19 10:55:31.909421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.673 [2024-11-19 10:55:31.909435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.673 qpair failed and we were unable to recover it. 00:28:24.673 [2024-11-19 10:55:31.919447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.673 [2024-11-19 10:55:31.919503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.673 [2024-11-19 10:55:31.919518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.673 [2024-11-19 10:55:31.919526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.673 [2024-11-19 10:55:31.919533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.673 [2024-11-19 10:55:31.919548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.673 qpair failed and we were unable to recover it. 00:28:24.673 [2024-11-19 10:55:31.929452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.673 [2024-11-19 10:55:31.929509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.673 [2024-11-19 10:55:31.929523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.673 [2024-11-19 10:55:31.929530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.673 [2024-11-19 10:55:31.929536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.673 [2024-11-19 10:55:31.929552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.673 qpair failed and we were unable to recover it. 
00:28:24.673 [2024-11-19 10:55:31.939498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.673 [2024-11-19 10:55:31.939551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.673 [2024-11-19 10:55:31.939564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.673 [2024-11-19 10:55:31.939572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.673 [2024-11-19 10:55:31.939582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.673 [2024-11-19 10:55:31.939596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.673 qpair failed and we were unable to recover it. 00:28:24.673 [2024-11-19 10:55:31.949532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.673 [2024-11-19 10:55:31.949585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.673 [2024-11-19 10:55:31.949599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.673 [2024-11-19 10:55:31.949606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.673 [2024-11-19 10:55:31.949612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.673 [2024-11-19 10:55:31.949628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.673 qpair failed and we were unable to recover it. 00:28:24.673 [2024-11-19 10:55:31.959565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.673 [2024-11-19 10:55:31.959619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.673 [2024-11-19 10:55:31.959633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.673 [2024-11-19 10:55:31.959640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.673 [2024-11-19 10:55:31.959647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.673 [2024-11-19 10:55:31.959661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.673 qpair failed and we were unable to recover it. 
00:28:24.673 [2024-11-19 10:55:31.969596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.673 [2024-11-19 10:55:31.969664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.673 [2024-11-19 10:55:31.969678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.673 [2024-11-19 10:55:31.969685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.673 [2024-11-19 10:55:31.969691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:24.673 [2024-11-19 10:55:31.969706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:24.673 qpair failed and we were unable to recover it.
00:28:24.673 [2024-11-19 10:55:31.979605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.673 [2024-11-19 10:55:31.979663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.674 [2024-11-19 10:55:31.979677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.674 [2024-11-19 10:55:31.979684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.674 [2024-11-19 10:55:31.979691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:24.674 [2024-11-19 10:55:31.979706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:24.674 qpair failed and we were unable to recover it.
00:28:24.674 [2024-11-19 10:55:31.989640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.674 [2024-11-19 10:55:31.989694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.674 [2024-11-19 10:55:31.989708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.674 [2024-11-19 10:55:31.989715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.674 [2024-11-19 10:55:31.989722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:24.674 [2024-11-19 10:55:31.989736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:24.674 qpair failed and we were unable to recover it.
00:28:24.674 [2024-11-19 10:55:31.999686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.674 [2024-11-19 10:55:31.999750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.674 [2024-11-19 10:55:31.999764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.674 [2024-11-19 10:55:31.999771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.674 [2024-11-19 10:55:31.999778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:24.674 [2024-11-19 10:55:31.999793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:24.674 qpair failed and we were unable to recover it.
00:28:24.674 [2024-11-19 10:55:32.009709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.674 [2024-11-19 10:55:32.009764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.674 [2024-11-19 10:55:32.009779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.674 [2024-11-19 10:55:32.009786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.674 [2024-11-19 10:55:32.009792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:24.674 [2024-11-19 10:55:32.009807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:24.674 qpair failed and we were unable to recover it.
00:28:24.674 [2024-11-19 10:55:32.019730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.674 [2024-11-19 10:55:32.019809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.674 [2024-11-19 10:55:32.019823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.674 [2024-11-19 10:55:32.019831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.674 [2024-11-19 10:55:32.019837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:24.674 [2024-11-19 10:55:32.019852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:24.674 qpair failed and we were unable to recover it.
00:28:24.674 [2024-11-19 10:55:32.029754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.674 [2024-11-19 10:55:32.029833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.674 [2024-11-19 10:55:32.029851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.674 [2024-11-19 10:55:32.029859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.674 [2024-11-19 10:55:32.029865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:24.674 [2024-11-19 10:55:32.029880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:24.674 qpair failed and we were unable to recover it.
00:28:24.674 [2024-11-19 10:55:32.039792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.674 [2024-11-19 10:55:32.039850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.674 [2024-11-19 10:55:32.039863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.674 [2024-11-19 10:55:32.039871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.674 [2024-11-19 10:55:32.039877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:24.674 [2024-11-19 10:55:32.039892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:24.674 qpair failed and we were unable to recover it.
00:28:24.674 [2024-11-19 10:55:32.049822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.674 [2024-11-19 10:55:32.049880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.674 [2024-11-19 10:55:32.049895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.674 [2024-11-19 10:55:32.049902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.674 [2024-11-19 10:55:32.049909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:24.674 [2024-11-19 10:55:32.049924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:24.674 qpair failed and we were unable to recover it.
00:28:24.674 [2024-11-19 10:55:32.059865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.674 [2024-11-19 10:55:32.059927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.674 [2024-11-19 10:55:32.059940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.674 [2024-11-19 10:55:32.059952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.674 [2024-11-19 10:55:32.059959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:24.674 [2024-11-19 10:55:32.059974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:24.674 qpair failed and we were unable to recover it.
00:28:24.674 [2024-11-19 10:55:32.069876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.674 [2024-11-19 10:55:32.069932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.674 [2024-11-19 10:55:32.069946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.674 [2024-11-19 10:55:32.069957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.674 [2024-11-19 10:55:32.069968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:24.674 [2024-11-19 10:55:32.069983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:24.674 qpair failed and we were unable to recover it.
00:28:24.674 [2024-11-19 10:55:32.079929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.674 [2024-11-19 10:55:32.079995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.674 [2024-11-19 10:55:32.080011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.674 [2024-11-19 10:55:32.080019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.674 [2024-11-19 10:55:32.080026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:24.674 [2024-11-19 10:55:32.080041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:24.674 qpair failed and we were unable to recover it.
00:28:24.674 [2024-11-19 10:55:32.089994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.674 [2024-11-19 10:55:32.090095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.674 [2024-11-19 10:55:32.090109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.674 [2024-11-19 10:55:32.090116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.674 [2024-11-19 10:55:32.090122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:24.674 [2024-11-19 10:55:32.090137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:24.674 qpair failed and we were unable to recover it.
00:28:24.674 [2024-11-19 10:55:32.099971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.674 [2024-11-19 10:55:32.100025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.674 [2024-11-19 10:55:32.100039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.674 [2024-11-19 10:55:32.100045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.674 [2024-11-19 10:55:32.100052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:24.674 [2024-11-19 10:55:32.100067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:24.674 qpair failed and we were unable to recover it.
00:28:24.674 [2024-11-19 10:55:32.109982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.674 [2024-11-19 10:55:32.110034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.675 [2024-11-19 10:55:32.110048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.675 [2024-11-19 10:55:32.110055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.675 [2024-11-19 10:55:32.110062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:24.675 [2024-11-19 10:55:32.110077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:24.675 qpair failed and we were unable to recover it.
00:28:24.675 [2024-11-19 10:55:32.120023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.675 [2024-11-19 10:55:32.120081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.675 [2024-11-19 10:55:32.120095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.675 [2024-11-19 10:55:32.120103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.675 [2024-11-19 10:55:32.120109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.675 [2024-11-19 10:55:32.120125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.675 qpair failed and we were unable to recover it. 00:28:24.935 [2024-11-19 10:55:32.130046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.935 [2024-11-19 10:55:32.130119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.935 [2024-11-19 10:55:32.130133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.935 [2024-11-19 10:55:32.130140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.936 [2024-11-19 10:55:32.130146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.936 [2024-11-19 10:55:32.130161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.936 qpair failed and we were unable to recover it. 00:28:24.936 [2024-11-19 10:55:32.140127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:24.936 [2024-11-19 10:55:32.140181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:24.936 [2024-11-19 10:55:32.140194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:24.936 [2024-11-19 10:55:32.140202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:24.936 [2024-11-19 10:55:32.140208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:24.936 [2024-11-19 10:55:32.140223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:24.936 qpair failed and we were unable to recover it. 
00:28:24.936 [2024-11-19 10:55:32.150098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.936 [2024-11-19 10:55:32.150158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.936 [2024-11-19 10:55:32.150172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.936 [2024-11-19 10:55:32.150179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.936 [2024-11-19 10:55:32.150186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:24.936 [2024-11-19 10:55:32.150201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:24.936 qpair failed and we were unable to recover it.
00:28:24.936 [2024-11-19 10:55:32.160137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.936 [2024-11-19 10:55:32.160215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.936 [2024-11-19 10:55:32.160229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.936 [2024-11-19 10:55:32.160236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.936 [2024-11-19 10:55:32.160242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:24.936 [2024-11-19 10:55:32.160257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:24.936 qpair failed and we were unable to recover it.
00:28:24.936 [2024-11-19 10:55:32.170204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.936 [2024-11-19 10:55:32.170253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.936 [2024-11-19 10:55:32.170267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.936 [2024-11-19 10:55:32.170274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.936 [2024-11-19 10:55:32.170280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:24.936 [2024-11-19 10:55:32.170295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:24.936 qpair failed and we were unable to recover it.
00:28:24.936 [2024-11-19 10:55:32.180183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.936 [2024-11-19 10:55:32.180241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.936 [2024-11-19 10:55:32.180255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.936 [2024-11-19 10:55:32.180263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.936 [2024-11-19 10:55:32.180270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:24.936 [2024-11-19 10:55:32.180285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:24.936 qpair failed and we were unable to recover it.
00:28:24.936 [2024-11-19 10:55:32.190217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.936 [2024-11-19 10:55:32.190288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.936 [2024-11-19 10:55:32.190302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.936 [2024-11-19 10:55:32.190309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.936 [2024-11-19 10:55:32.190316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:24.936 [2024-11-19 10:55:32.190331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:24.936 qpair failed and we were unable to recover it.
00:28:24.936 [2024-11-19 10:55:32.200173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.936 [2024-11-19 10:55:32.200231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.936 [2024-11-19 10:55:32.200245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.936 [2024-11-19 10:55:32.200255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.936 [2024-11-19 10:55:32.200262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:24.936 [2024-11-19 10:55:32.200277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:24.936 qpair failed and we were unable to recover it.
00:28:24.936 [2024-11-19 10:55:32.210323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.936 [2024-11-19 10:55:32.210381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.936 [2024-11-19 10:55:32.210394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.936 [2024-11-19 10:55:32.210401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.936 [2024-11-19 10:55:32.210409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:24.936 [2024-11-19 10:55:32.210424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:24.936 qpair failed and we were unable to recover it.
00:28:24.936 [2024-11-19 10:55:32.220220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.936 [2024-11-19 10:55:32.220279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.936 [2024-11-19 10:55:32.220293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.936 [2024-11-19 10:55:32.220300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.936 [2024-11-19 10:55:32.220306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:24.936 [2024-11-19 10:55:32.220321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:24.936 qpair failed and we were unable to recover it.
00:28:24.936 [2024-11-19 10:55:32.230316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.936 [2024-11-19 10:55:32.230368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.936 [2024-11-19 10:55:32.230381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.936 [2024-11-19 10:55:32.230388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.936 [2024-11-19 10:55:32.230395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:24.936 [2024-11-19 10:55:32.230410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:24.936 qpair failed and we were unable to recover it.
00:28:24.936 [2024-11-19 10:55:32.240356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.936 [2024-11-19 10:55:32.240410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.936 [2024-11-19 10:55:32.240424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.936 [2024-11-19 10:55:32.240431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.936 [2024-11-19 10:55:32.240438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:24.936 [2024-11-19 10:55:32.240456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:24.936 qpair failed and we were unable to recover it.
00:28:24.936 [2024-11-19 10:55:32.250385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.936 [2024-11-19 10:55:32.250438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.936 [2024-11-19 10:55:32.250452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.936 [2024-11-19 10:55:32.250459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.936 [2024-11-19 10:55:32.250466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:24.936 [2024-11-19 10:55:32.250481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:24.936 qpair failed and we were unable to recover it.
00:28:24.936 [2024-11-19 10:55:32.260407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.936 [2024-11-19 10:55:32.260461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.936 [2024-11-19 10:55:32.260474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.936 [2024-11-19 10:55:32.260481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.936 [2024-11-19 10:55:32.260488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:24.936 [2024-11-19 10:55:32.260503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:24.936 qpair failed and we were unable to recover it.
00:28:24.936 [2024-11-19 10:55:32.270437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.936 [2024-11-19 10:55:32.270489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.936 [2024-11-19 10:55:32.270503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.936 [2024-11-19 10:55:32.270510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.936 [2024-11-19 10:55:32.270517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:24.936 [2024-11-19 10:55:32.270531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:24.936 qpair failed and we were unable to recover it.
00:28:24.936 [2024-11-19 10:55:32.280471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.936 [2024-11-19 10:55:32.280526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.936 [2024-11-19 10:55:32.280539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.936 [2024-11-19 10:55:32.280546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.936 [2024-11-19 10:55:32.280552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:24.936 [2024-11-19 10:55:32.280568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:24.936 qpair failed and we were unable to recover it.
00:28:24.936 [2024-11-19 10:55:32.290493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.936 [2024-11-19 10:55:32.290550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.936 [2024-11-19 10:55:32.290564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.936 [2024-11-19 10:55:32.290571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.936 [2024-11-19 10:55:32.290578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:24.936 [2024-11-19 10:55:32.290593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:24.936 qpair failed and we were unable to recover it.
00:28:24.936 [2024-11-19 10:55:32.300516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.936 [2024-11-19 10:55:32.300568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.936 [2024-11-19 10:55:32.300581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.936 [2024-11-19 10:55:32.300589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.936 [2024-11-19 10:55:32.300595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:24.936 [2024-11-19 10:55:32.300610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:24.936 qpair failed and we were unable to recover it.
00:28:24.936 [2024-11-19 10:55:32.310552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.936 [2024-11-19 10:55:32.310607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.936 [2024-11-19 10:55:32.310620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.936 [2024-11-19 10:55:32.310628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.936 [2024-11-19 10:55:32.310635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:24.936 [2024-11-19 10:55:32.310650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:24.936 qpair failed and we were unable to recover it.
00:28:24.936 [2024-11-19 10:55:32.320587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.936 [2024-11-19 10:55:32.320641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.936 [2024-11-19 10:55:32.320654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.936 [2024-11-19 10:55:32.320661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.936 [2024-11-19 10:55:32.320668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:24.936 [2024-11-19 10:55:32.320683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:24.937 qpair failed and we were unable to recover it.
00:28:24.937 [2024-11-19 10:55:32.330608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.937 [2024-11-19 10:55:32.330665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.937 [2024-11-19 10:55:32.330678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.937 [2024-11-19 10:55:32.330689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.937 [2024-11-19 10:55:32.330695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:24.937 [2024-11-19 10:55:32.330709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:24.937 qpair failed and we were unable to recover it.
00:28:24.937 [2024-11-19 10:55:32.340635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.937 [2024-11-19 10:55:32.340688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.937 [2024-11-19 10:55:32.340702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.937 [2024-11-19 10:55:32.340709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.937 [2024-11-19 10:55:32.340716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:24.937 [2024-11-19 10:55:32.340731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:24.937 qpair failed and we were unable to recover it.
00:28:24.937 [2024-11-19 10:55:32.350672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.937 [2024-11-19 10:55:32.350726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.937 [2024-11-19 10:55:32.350740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.937 [2024-11-19 10:55:32.350747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.937 [2024-11-19 10:55:32.350754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:24.937 [2024-11-19 10:55:32.350770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:24.937 qpair failed and we were unable to recover it.
00:28:24.937 [2024-11-19 10:55:32.360697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.937 [2024-11-19 10:55:32.360755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.937 [2024-11-19 10:55:32.360769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.937 [2024-11-19 10:55:32.360776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.937 [2024-11-19 10:55:32.360783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:24.937 [2024-11-19 10:55:32.360798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:24.937 qpair failed and we were unable to recover it.
00:28:24.937 [2024-11-19 10:55:32.370749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.937 [2024-11-19 10:55:32.370823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.937 [2024-11-19 10:55:32.370836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.937 [2024-11-19 10:55:32.370843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.937 [2024-11-19 10:55:32.370850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:24.937 [2024-11-19 10:55:32.370867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:24.937 qpair failed and we were unable to recover it.
00:28:24.937 [2024-11-19 10:55:32.380746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:24.937 [2024-11-19 10:55:32.380817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:24.937 [2024-11-19 10:55:32.380832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:24.937 [2024-11-19 10:55:32.380839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:24.937 [2024-11-19 10:55:32.380846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:24.937 [2024-11-19 10:55:32.380861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:24.937 qpair failed and we were unable to recover it.
00:28:25.198 [2024-11-19 10:55:32.390783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:25.198 [2024-11-19 10:55:32.390840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:25.198 [2024-11-19 10:55:32.390854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:25.198 [2024-11-19 10:55:32.390861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:25.198 [2024-11-19 10:55:32.390868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:25.198 [2024-11-19 10:55:32.390882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:25.198 qpair failed and we were unable to recover it.
00:28:25.198 [2024-11-19 10:55:32.400796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:25.198 [2024-11-19 10:55:32.400853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:25.198 [2024-11-19 10:55:32.400867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:25.198 [2024-11-19 10:55:32.400874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:25.198 [2024-11-19 10:55:32.400881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:25.198 [2024-11-19 10:55:32.400895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:25.198 qpair failed and we were unable to recover it.
00:28:25.198 [2024-11-19 10:55:32.410834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:25.198 [2024-11-19 10:55:32.410924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:25.198 [2024-11-19 10:55:32.410938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:25.198 [2024-11-19 10:55:32.410946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:25.198 [2024-11-19 10:55:32.410957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:25.198 [2024-11-19 10:55:32.410972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:25.198 qpair failed and we were unable to recover it.
00:28:25.198 [2024-11-19 10:55:32.420903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:25.198 [2024-11-19 10:55:32.420978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:25.198 [2024-11-19 10:55:32.420992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:25.198 [2024-11-19 10:55:32.420999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:25.198 [2024-11-19 10:55:32.421005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:25.198 [2024-11-19 10:55:32.421020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:25.198 qpair failed and we were unable to recover it.
00:28:25.198 [2024-11-19 10:55:32.430885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:25.198 [2024-11-19 10:55:32.430939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:25.198 [2024-11-19 10:55:32.430957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:25.198 [2024-11-19 10:55:32.430965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:25.198 [2024-11-19 10:55:32.430971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:25.198 [2024-11-19 10:55:32.430986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:25.198 qpair failed and we were unable to recover it.
00:28:25.198 [2024-11-19 10:55:32.440934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:25.198 [2024-11-19 10:55:32.441002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:25.198 [2024-11-19 10:55:32.441016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:25.198 [2024-11-19 10:55:32.441023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:25.198 [2024-11-19 10:55:32.441030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:25.198 [2024-11-19 10:55:32.441045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:25.198 qpair failed and we were unable to recover it.
00:28:25.198 [2024-11-19 10:55:32.450968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:25.198 [2024-11-19 10:55:32.451024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:25.198 [2024-11-19 10:55:32.451038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:25.198 [2024-11-19 10:55:32.451045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:25.198 [2024-11-19 10:55:32.451051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:25.198 [2024-11-19 10:55:32.451067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:25.198 qpair failed and we were unable to recover it.
00:28:25.198 [2024-11-19 10:55:32.460978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:25.198 [2024-11-19 10:55:32.461034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:25.198 [2024-11-19 10:55:32.461053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:25.198 [2024-11-19 10:55:32.461061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:25.198 [2024-11-19 10:55:32.461068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:25.198 [2024-11-19 10:55:32.461083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:25.198 qpair failed and we were unable to recover it.
00:28:25.198 [2024-11-19 10:55:32.471017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:25.198 [2024-11-19 10:55:32.471073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:25.198 [2024-11-19 10:55:32.471087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:25.198 [2024-11-19 10:55:32.471094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:25.198 [2024-11-19 10:55:32.471101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:25.198 [2024-11-19 10:55:32.471115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:25.198 qpair failed and we were unable to recover it.
00:28:25.198 [2024-11-19 10:55:32.481084] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:25.199 [2024-11-19 10:55:32.481144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:25.199 [2024-11-19 10:55:32.481157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:25.199 [2024-11-19 10:55:32.481165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:25.199 [2024-11-19 10:55:32.481172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:25.199 [2024-11-19 10:55:32.481187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:25.199 qpair failed and we were unable to recover it.
00:28:25.199 [2024-11-19 10:55:32.491111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:25.199 [2024-11-19 10:55:32.491169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:25.199 [2024-11-19 10:55:32.491182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:25.199 [2024-11-19 10:55:32.491189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:25.199 [2024-11-19 10:55:32.491196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:25.199 [2024-11-19 10:55:32.491211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:25.199 qpair failed and we were unable to recover it.
00:28:25.199 [2024-11-19 10:55:32.501117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:25.199 [2024-11-19 10:55:32.501172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:25.199 [2024-11-19 10:55:32.501185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:25.199 [2024-11-19 10:55:32.501192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:25.199 [2024-11-19 10:55:32.501202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:25.199 [2024-11-19 10:55:32.501218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:25.199 qpair failed and we were unable to recover it.
00:28:25.199 [2024-11-19 10:55:32.511127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:25.199 [2024-11-19 10:55:32.511181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:25.199 [2024-11-19 10:55:32.511194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:25.199 [2024-11-19 10:55:32.511201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:25.199 [2024-11-19 10:55:32.511208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:25.199 [2024-11-19 10:55:32.511223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:25.199 qpair failed and we were unable to recover it.
00:28:25.199 [2024-11-19 10:55:32.521163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:25.199 [2024-11-19 10:55:32.521221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:25.199 [2024-11-19 10:55:32.521235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:25.199 [2024-11-19 10:55:32.521243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:25.199 [2024-11-19 10:55:32.521250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:25.199 [2024-11-19 10:55:32.521264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:25.199 qpair failed and we were unable to recover it.
00:28:25.199 [2024-11-19 10:55:32.531142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:25.199 [2024-11-19 10:55:32.531195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:25.199 [2024-11-19 10:55:32.531209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:25.199 [2024-11-19 10:55:32.531216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:25.199 [2024-11-19 10:55:32.531222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90
00:28:25.199 [2024-11-19 10:55:32.531237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:25.199 qpair failed and we were unable to recover it.
00:28:25.199 [2024-11-19 10:55:32.541145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.199 [2024-11-19 10:55:32.541202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.199 [2024-11-19 10:55:32.541216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.199 [2024-11-19 10:55:32.541223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.199 [2024-11-19 10:55:32.541230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.199 [2024-11-19 10:55:32.541245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.199 qpair failed and we were unable to recover it. 00:28:25.199 [2024-11-19 10:55:32.551241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.199 [2024-11-19 10:55:32.551317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.199 [2024-11-19 10:55:32.551332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.199 [2024-11-19 10:55:32.551339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.199 [2024-11-19 10:55:32.551346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.199 [2024-11-19 10:55:32.551360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.199 qpair failed and we were unable to recover it. 00:28:25.199 [2024-11-19 10:55:32.561317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.199 [2024-11-19 10:55:32.561376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.199 [2024-11-19 10:55:32.561390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.199 [2024-11-19 10:55:32.561397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.199 [2024-11-19 10:55:32.561403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.199 [2024-11-19 10:55:32.561418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.199 qpair failed and we were unable to recover it. 
00:28:25.199 [2024-11-19 10:55:32.571305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.199 [2024-11-19 10:55:32.571364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.199 [2024-11-19 10:55:32.571378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.199 [2024-11-19 10:55:32.571386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.199 [2024-11-19 10:55:32.571392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.199 [2024-11-19 10:55:32.571407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.199 qpair failed and we were unable to recover it. 00:28:25.199 [2024-11-19 10:55:32.581326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.199 [2024-11-19 10:55:32.581400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.199 [2024-11-19 10:55:32.581415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.199 [2024-11-19 10:55:32.581423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.199 [2024-11-19 10:55:32.581429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.199 [2024-11-19 10:55:32.581444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.199 qpair failed and we were unable to recover it. 00:28:25.199 [2024-11-19 10:55:32.591307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.199 [2024-11-19 10:55:32.591363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.199 [2024-11-19 10:55:32.591380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.199 [2024-11-19 10:55:32.591387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.199 [2024-11-19 10:55:32.591394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.199 [2024-11-19 10:55:32.591408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.199 qpair failed and we were unable to recover it. 
00:28:25.199 [2024-11-19 10:55:32.601401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.199 [2024-11-19 10:55:32.601459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.199 [2024-11-19 10:55:32.601474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.199 [2024-11-19 10:55:32.601481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.199 [2024-11-19 10:55:32.601488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.199 [2024-11-19 10:55:32.601503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.199 qpair failed and we were unable to recover it. 00:28:25.200 [2024-11-19 10:55:32.611426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.200 [2024-11-19 10:55:32.611491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.200 [2024-11-19 10:55:32.611506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.200 [2024-11-19 10:55:32.611514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.200 [2024-11-19 10:55:32.611520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.200 [2024-11-19 10:55:32.611536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.200 qpair failed and we were unable to recover it. 00:28:25.200 [2024-11-19 10:55:32.621469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.200 [2024-11-19 10:55:32.621522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.200 [2024-11-19 10:55:32.621536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.200 [2024-11-19 10:55:32.621543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.200 [2024-11-19 10:55:32.621550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.200 [2024-11-19 10:55:32.621565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.200 qpair failed and we were unable to recover it. 
00:28:25.200 [2024-11-19 10:55:32.631501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.200 [2024-11-19 10:55:32.631563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.200 [2024-11-19 10:55:32.631578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.200 [2024-11-19 10:55:32.631585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.200 [2024-11-19 10:55:32.631595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.200 [2024-11-19 10:55:32.631610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.200 qpair failed and we were unable to recover it. 00:28:25.200 [2024-11-19 10:55:32.641445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.200 [2024-11-19 10:55:32.641498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.200 [2024-11-19 10:55:32.641512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.200 [2024-11-19 10:55:32.641519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.200 [2024-11-19 10:55:32.641526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.200 [2024-11-19 10:55:32.641542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.200 qpair failed and we were unable to recover it. 00:28:25.461 [2024-11-19 10:55:32.651525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.461 [2024-11-19 10:55:32.651598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.461 [2024-11-19 10:55:32.651613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.461 [2024-11-19 10:55:32.651620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.461 [2024-11-19 10:55:32.651626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.461 [2024-11-19 10:55:32.651642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.461 qpair failed and we were unable to recover it. 
00:28:25.461 [2024-11-19 10:55:32.661507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.461 [2024-11-19 10:55:32.661555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.461 [2024-11-19 10:55:32.661569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.461 [2024-11-19 10:55:32.661576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.461 [2024-11-19 10:55:32.661582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.461 [2024-11-19 10:55:32.661597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.461 qpair failed and we were unable to recover it. 00:28:25.461 [2024-11-19 10:55:32.671556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.461 [2024-11-19 10:55:32.671612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.461 [2024-11-19 10:55:32.671628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.461 [2024-11-19 10:55:32.671635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.461 [2024-11-19 10:55:32.671642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.461 [2024-11-19 10:55:32.671657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.461 qpair failed and we were unable to recover it. 00:28:25.462 [2024-11-19 10:55:32.681587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.462 [2024-11-19 10:55:32.681646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.462 [2024-11-19 10:55:32.681659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.462 [2024-11-19 10:55:32.681667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.462 [2024-11-19 10:55:32.681674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.462 [2024-11-19 10:55:32.681690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.462 qpair failed and we were unable to recover it. 
00:28:25.462 [2024-11-19 10:55:32.691609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.462 [2024-11-19 10:55:32.691664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.462 [2024-11-19 10:55:32.691677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.462 [2024-11-19 10:55:32.691685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.462 [2024-11-19 10:55:32.691692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.462 [2024-11-19 10:55:32.691707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.462 qpair failed and we were unable to recover it. 00:28:25.462 [2024-11-19 10:55:32.701787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.462 [2024-11-19 10:55:32.701847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.462 [2024-11-19 10:55:32.701861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.462 [2024-11-19 10:55:32.701869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.462 [2024-11-19 10:55:32.701876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.462 [2024-11-19 10:55:32.701891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.462 qpair failed and we were unable to recover it. 00:28:25.462 [2024-11-19 10:55:32.711643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.462 [2024-11-19 10:55:32.711694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.462 [2024-11-19 10:55:32.711708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.462 [2024-11-19 10:55:32.711716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.462 [2024-11-19 10:55:32.711722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.462 [2024-11-19 10:55:32.711737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.462 qpair failed and we were unable to recover it. 
00:28:25.462 [2024-11-19 10:55:32.721707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.462 [2024-11-19 10:55:32.721773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.462 [2024-11-19 10:55:32.721787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.462 [2024-11-19 10:55:32.721794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.462 [2024-11-19 10:55:32.721801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.462 [2024-11-19 10:55:32.721816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.462 qpair failed and we were unable to recover it. 00:28:25.462 [2024-11-19 10:55:32.731743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.462 [2024-11-19 10:55:32.731821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.462 [2024-11-19 10:55:32.731836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.462 [2024-11-19 10:55:32.731843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.462 [2024-11-19 10:55:32.731849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.462 [2024-11-19 10:55:32.731864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.462 qpair failed and we were unable to recover it. 00:28:25.462 [2024-11-19 10:55:32.741795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.462 [2024-11-19 10:55:32.741854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.462 [2024-11-19 10:55:32.741868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.462 [2024-11-19 10:55:32.741875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.462 [2024-11-19 10:55:32.741881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.462 [2024-11-19 10:55:32.741896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.462 qpair failed and we were unable to recover it. 
00:28:25.462 [2024-11-19 10:55:32.751803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.462 [2024-11-19 10:55:32.751862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.462 [2024-11-19 10:55:32.751877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.462 [2024-11-19 10:55:32.751885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.462 [2024-11-19 10:55:32.751892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.462 [2024-11-19 10:55:32.751907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.462 qpair failed and we were unable to recover it. 00:28:25.462 [2024-11-19 10:55:32.761847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.462 [2024-11-19 10:55:32.761915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.462 [2024-11-19 10:55:32.761930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.462 [2024-11-19 10:55:32.761941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.462 [2024-11-19 10:55:32.761951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.462 [2024-11-19 10:55:32.761966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.462 qpair failed and we were unable to recover it. 00:28:25.462 [2024-11-19 10:55:32.771804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.462 [2024-11-19 10:55:32.771877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.462 [2024-11-19 10:55:32.771891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.462 [2024-11-19 10:55:32.771898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.462 [2024-11-19 10:55:32.771905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.462 [2024-11-19 10:55:32.771920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.462 qpair failed and we were unable to recover it. 
00:28:25.462 [2024-11-19 10:55:32.781824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.462 [2024-11-19 10:55:32.781881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.462 [2024-11-19 10:55:32.781895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.462 [2024-11-19 10:55:32.781903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.462 [2024-11-19 10:55:32.781910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.462 [2024-11-19 10:55:32.781925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.462 qpair failed and we were unable to recover it. 00:28:25.462 [2024-11-19 10:55:32.791878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.462 [2024-11-19 10:55:32.791946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.462 [2024-11-19 10:55:32.791964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.462 [2024-11-19 10:55:32.791972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.462 [2024-11-19 10:55:32.791979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.462 [2024-11-19 10:55:32.791994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.462 qpair failed and we were unable to recover it. 00:28:25.462 [2024-11-19 10:55:32.802001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.462 [2024-11-19 10:55:32.802059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.462 [2024-11-19 10:55:32.802074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.462 [2024-11-19 10:55:32.802081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.462 [2024-11-19 10:55:32.802087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.463 [2024-11-19 10:55:32.802107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.463 qpair failed and we were unable to recover it. 
00:28:25.463 [2024-11-19 10:55:32.811993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.463 [2024-11-19 10:55:32.812050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.463 [2024-11-19 10:55:32.812063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.463 [2024-11-19 10:55:32.812070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.463 [2024-11-19 10:55:32.812077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.463 [2024-11-19 10:55:32.812093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.463 qpair failed and we were unable to recover it. 00:28:25.463 [2024-11-19 10:55:32.821963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.463 [2024-11-19 10:55:32.822018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.463 [2024-11-19 10:55:32.822032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.463 [2024-11-19 10:55:32.822039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.463 [2024-11-19 10:55:32.822046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.463 [2024-11-19 10:55:32.822061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.463 qpair failed and we were unable to recover it. 00:28:25.463 [2024-11-19 10:55:32.831971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.463 [2024-11-19 10:55:32.832039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.463 [2024-11-19 10:55:32.832054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.463 [2024-11-19 10:55:32.832061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.463 [2024-11-19 10:55:32.832069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.463 [2024-11-19 10:55:32.832084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.463 qpair failed and we were unable to recover it. 
00:28:25.463 [2024-11-19 10:55:32.842062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.463 [2024-11-19 10:55:32.842119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.463 [2024-11-19 10:55:32.842132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.463 [2024-11-19 10:55:32.842140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.463 [2024-11-19 10:55:32.842146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.463 [2024-11-19 10:55:32.842162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.463 qpair failed and we were unable to recover it. 00:28:25.463 [2024-11-19 10:55:32.852044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.463 [2024-11-19 10:55:32.852109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.463 [2024-11-19 10:55:32.852123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.463 [2024-11-19 10:55:32.852130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.463 [2024-11-19 10:55:32.852137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.463 [2024-11-19 10:55:32.852151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.463 qpair failed and we were unable to recover it. 00:28:25.463 [2024-11-19 10:55:32.862187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.463 [2024-11-19 10:55:32.862238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.463 [2024-11-19 10:55:32.862251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.463 [2024-11-19 10:55:32.862258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.463 [2024-11-19 10:55:32.862265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.463 [2024-11-19 10:55:32.862280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.463 qpair failed and we were unable to recover it. 
00:28:25.463 [2024-11-19 10:55:32.872091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.463 [2024-11-19 10:55:32.872146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.463 [2024-11-19 10:55:32.872160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.463 [2024-11-19 10:55:32.872167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.463 [2024-11-19 10:55:32.872174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.463 [2024-11-19 10:55:32.872189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.463 qpair failed and we were unable to recover it. 00:28:25.463 [2024-11-19 10:55:32.882194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.463 [2024-11-19 10:55:32.882253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.463 [2024-11-19 10:55:32.882267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.463 [2024-11-19 10:55:32.882275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.463 [2024-11-19 10:55:32.882281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.463 [2024-11-19 10:55:32.882297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.463 qpair failed and we were unable to recover it. 00:28:25.463 [2024-11-19 10:55:32.892276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.463 [2024-11-19 10:55:32.892329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.463 [2024-11-19 10:55:32.892343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.463 [2024-11-19 10:55:32.892353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.463 [2024-11-19 10:55:32.892361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.463 [2024-11-19 10:55:32.892376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.463 qpair failed and we were unable to recover it. 
00:28:25.463 [2024-11-19 10:55:32.902171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.463 [2024-11-19 10:55:32.902235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.463 [2024-11-19 10:55:32.902249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.463 [2024-11-19 10:55:32.902256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.463 [2024-11-19 10:55:32.902262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.463 [2024-11-19 10:55:32.902277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.463 qpair failed and we were unable to recover it. 00:28:25.725 [2024-11-19 10:55:32.912271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.725 [2024-11-19 10:55:32.912324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.725 [2024-11-19 10:55:32.912338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.725 [2024-11-19 10:55:32.912345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.725 [2024-11-19 10:55:32.912352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.725 [2024-11-19 10:55:32.912367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.725 qpair failed and we were unable to recover it. 00:28:25.725 [2024-11-19 10:55:32.922299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.725 [2024-11-19 10:55:32.922375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.725 [2024-11-19 10:55:32.922389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.725 [2024-11-19 10:55:32.922396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.725 [2024-11-19 10:55:32.922403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.725 [2024-11-19 10:55:32.922418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.725 qpair failed and we were unable to recover it. 
00:28:25.725 [2024-11-19 10:55:32.932330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.725 [2024-11-19 10:55:32.932387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.725 [2024-11-19 10:55:32.932400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.725 [2024-11-19 10:55:32.932407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.725 [2024-11-19 10:55:32.932414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.725 [2024-11-19 10:55:32.932432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.725 qpair failed and we were unable to recover it. 00:28:25.725 [2024-11-19 10:55:32.942307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.725 [2024-11-19 10:55:32.942362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.725 [2024-11-19 10:55:32.942376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.725 [2024-11-19 10:55:32.942383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.725 [2024-11-19 10:55:32.942390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.725 [2024-11-19 10:55:32.942405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.725 qpair failed and we were unable to recover it. 00:28:25.725 [2024-11-19 10:55:32.952365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.725 [2024-11-19 10:55:32.952428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.726 [2024-11-19 10:55:32.952443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.726 [2024-11-19 10:55:32.952450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.726 [2024-11-19 10:55:32.952456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.726 [2024-11-19 10:55:32.952471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.726 qpair failed and we were unable to recover it. 
00:28:25.726 [2024-11-19 10:55:32.962363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.726 [2024-11-19 10:55:32.962418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.726 [2024-11-19 10:55:32.962431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.726 [2024-11-19 10:55:32.962438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.726 [2024-11-19 10:55:32.962445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.726 [2024-11-19 10:55:32.962460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.726 qpair failed and we were unable to recover it. 00:28:25.726 [2024-11-19 10:55:32.972390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.726 [2024-11-19 10:55:32.972447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.726 [2024-11-19 10:55:32.972462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.726 [2024-11-19 10:55:32.972469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.726 [2024-11-19 10:55:32.972476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.726 [2024-11-19 10:55:32.972491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.726 qpair failed and we were unable to recover it. 00:28:25.726 [2024-11-19 10:55:32.982505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.726 [2024-11-19 10:55:32.982559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.726 [2024-11-19 10:55:32.982573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.726 [2024-11-19 10:55:32.982580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.726 [2024-11-19 10:55:32.982587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.726 [2024-11-19 10:55:32.982602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.726 qpair failed and we were unable to recover it. 
00:28:25.726 [2024-11-19 10:55:32.992441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.726 [2024-11-19 10:55:32.992497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.726 [2024-11-19 10:55:32.992510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.726 [2024-11-19 10:55:32.992517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.726 [2024-11-19 10:55:32.992524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.726 [2024-11-19 10:55:32.992538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.726 qpair failed and we were unable to recover it. 00:28:25.726 [2024-11-19 10:55:33.002549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.726 [2024-11-19 10:55:33.002603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.726 [2024-11-19 10:55:33.002617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.726 [2024-11-19 10:55:33.002624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.726 [2024-11-19 10:55:33.002631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.726 [2024-11-19 10:55:33.002645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.726 qpair failed and we were unable to recover it. 00:28:25.726 [2024-11-19 10:55:33.012574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.726 [2024-11-19 10:55:33.012622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.726 [2024-11-19 10:55:33.012636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.726 [2024-11-19 10:55:33.012643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.726 [2024-11-19 10:55:33.012649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.726 [2024-11-19 10:55:33.012664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.726 qpair failed and we were unable to recover it. 
00:28:25.726 [2024-11-19 10:55:33.022620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.726 [2024-11-19 10:55:33.022673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.726 [2024-11-19 10:55:33.022689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.726 [2024-11-19 10:55:33.022697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.726 [2024-11-19 10:55:33.022704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.726 [2024-11-19 10:55:33.022718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.726 qpair failed and we were unable to recover it. 00:28:25.726 [2024-11-19 10:55:33.032639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.726 [2024-11-19 10:55:33.032692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.726 [2024-11-19 10:55:33.032705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.726 [2024-11-19 10:55:33.032712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.726 [2024-11-19 10:55:33.032719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.726 [2024-11-19 10:55:33.032733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.726 qpair failed and we were unable to recover it. 00:28:25.726 [2024-11-19 10:55:33.042668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.726 [2024-11-19 10:55:33.042724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.726 [2024-11-19 10:55:33.042738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.726 [2024-11-19 10:55:33.042745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.726 [2024-11-19 10:55:33.042752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.726 [2024-11-19 10:55:33.042767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.726 qpair failed and we were unable to recover it. 
00:28:25.726 [2024-11-19 10:55:33.052719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.726 [2024-11-19 10:55:33.052798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.726 [2024-11-19 10:55:33.052812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.726 [2024-11-19 10:55:33.052819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.726 [2024-11-19 10:55:33.052825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.726 [2024-11-19 10:55:33.052840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.726 qpair failed and we were unable to recover it. 00:28:25.726 [2024-11-19 10:55:33.062736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.726 [2024-11-19 10:55:33.062800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.726 [2024-11-19 10:55:33.062813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.726 [2024-11-19 10:55:33.062821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.726 [2024-11-19 10:55:33.062830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.726 [2024-11-19 10:55:33.062846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.726 qpair failed and we were unable to recover it. 00:28:25.726 [2024-11-19 10:55:33.072763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.727 [2024-11-19 10:55:33.072817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.727 [2024-11-19 10:55:33.072831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.727 [2024-11-19 10:55:33.072838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.727 [2024-11-19 10:55:33.072844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.727 [2024-11-19 10:55:33.072860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.727 qpair failed and we were unable to recover it. 
00:28:25.727 [2024-11-19 10:55:33.082776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.727 [2024-11-19 10:55:33.082831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.727 [2024-11-19 10:55:33.082844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.727 [2024-11-19 10:55:33.082852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.727 [2024-11-19 10:55:33.082858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.727 [2024-11-19 10:55:33.082874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.727 qpair failed and we were unable to recover it. 00:28:25.727 [2024-11-19 10:55:33.092801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.727 [2024-11-19 10:55:33.092879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.727 [2024-11-19 10:55:33.092894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.727 [2024-11-19 10:55:33.092901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.727 [2024-11-19 10:55:33.092908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.727 [2024-11-19 10:55:33.092922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.727 qpair failed and we were unable to recover it. 00:28:25.727 [2024-11-19 10:55:33.102825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.727 [2024-11-19 10:55:33.102876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.727 [2024-11-19 10:55:33.102891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.727 [2024-11-19 10:55:33.102898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.727 [2024-11-19 10:55:33.102904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.727 [2024-11-19 10:55:33.102919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.727 qpair failed and we were unable to recover it. 
00:28:25.727 [2024-11-19 10:55:33.112798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.727 [2024-11-19 10:55:33.112853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.727 [2024-11-19 10:55:33.112867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.727 [2024-11-19 10:55:33.112874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.727 [2024-11-19 10:55:33.112881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.727 [2024-11-19 10:55:33.112896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.727 qpair failed and we were unable to recover it. 00:28:25.727 [2024-11-19 10:55:33.122865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.727 [2024-11-19 10:55:33.122951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.727 [2024-11-19 10:55:33.122965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.727 [2024-11-19 10:55:33.122972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.727 [2024-11-19 10:55:33.122978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.727 [2024-11-19 10:55:33.122994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.727 qpair failed and we were unable to recover it. 00:28:25.727 [2024-11-19 10:55:33.132998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.727 [2024-11-19 10:55:33.133059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.727 [2024-11-19 10:55:33.133074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.727 [2024-11-19 10:55:33.133081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.727 [2024-11-19 10:55:33.133087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.727 [2024-11-19 10:55:33.133103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.727 qpair failed and we were unable to recover it. 
00:28:25.727 [2024-11-19 10:55:33.142940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.727 [2024-11-19 10:55:33.143011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.727 [2024-11-19 10:55:33.143026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.727 [2024-11-19 10:55:33.143033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.727 [2024-11-19 10:55:33.143039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.727 [2024-11-19 10:55:33.143054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.727 qpair failed and we were unable to recover it. 00:28:25.727 [2024-11-19 10:55:33.152968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.727 [2024-11-19 10:55:33.153024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.727 [2024-11-19 10:55:33.153041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.727 [2024-11-19 10:55:33.153049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.727 [2024-11-19 10:55:33.153055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.727 [2024-11-19 10:55:33.153070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.727 qpair failed and we were unable to recover it. 00:28:25.727 [2024-11-19 10:55:33.162996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.727 [2024-11-19 10:55:33.163050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.727 [2024-11-19 10:55:33.163063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.727 [2024-11-19 10:55:33.163070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.727 [2024-11-19 10:55:33.163076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.727 [2024-11-19 10:55:33.163092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.727 qpair failed and we were unable to recover it. 
00:28:25.988 [2024-11-19 10:55:33.173028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.988 [2024-11-19 10:55:33.173084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.988 [2024-11-19 10:55:33.173097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.988 [2024-11-19 10:55:33.173104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.988 [2024-11-19 10:55:33.173110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.988 [2024-11-19 10:55:33.173125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.988 qpair failed and we were unable to recover it. 00:28:25.988 [2024-11-19 10:55:33.183054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.988 [2024-11-19 10:55:33.183118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.988 [2024-11-19 10:55:33.183131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.988 [2024-11-19 10:55:33.183139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.988 [2024-11-19 10:55:33.183145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.988 [2024-11-19 10:55:33.183161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.989 qpair failed and we were unable to recover it. 00:28:25.989 [2024-11-19 10:55:33.193115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.989 [2024-11-19 10:55:33.193168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.989 [2024-11-19 10:55:33.193181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.989 [2024-11-19 10:55:33.193188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.989 [2024-11-19 10:55:33.193200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.989 [2024-11-19 10:55:33.193215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.989 qpair failed and we were unable to recover it. 
00:28:25.989 [2024-11-19 10:55:33.203112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.989 [2024-11-19 10:55:33.203166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.989 [2024-11-19 10:55:33.203179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.989 [2024-11-19 10:55:33.203186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.989 [2024-11-19 10:55:33.203193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.989 [2024-11-19 10:55:33.203208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.989 qpair failed and we were unable to recover it. 00:28:25.989 [2024-11-19 10:55:33.213146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.989 [2024-11-19 10:55:33.213200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.989 [2024-11-19 10:55:33.213214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.989 [2024-11-19 10:55:33.213221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.989 [2024-11-19 10:55:33.213228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.989 [2024-11-19 10:55:33.213243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.989 qpair failed and we were unable to recover it. 00:28:25.989 [2024-11-19 10:55:33.223211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.989 [2024-11-19 10:55:33.223311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.989 [2024-11-19 10:55:33.223325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.989 [2024-11-19 10:55:33.223332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.989 [2024-11-19 10:55:33.223339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.989 [2024-11-19 10:55:33.223354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.989 qpair failed and we were unable to recover it. 
00:28:25.989 [2024-11-19 10:55:33.233202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.989 [2024-11-19 10:55:33.233258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.989 [2024-11-19 10:55:33.233271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.989 [2024-11-19 10:55:33.233279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.989 [2024-11-19 10:55:33.233286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.989 [2024-11-19 10:55:33.233301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.989 qpair failed and we were unable to recover it. 00:28:25.989 [2024-11-19 10:55:33.243224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.989 [2024-11-19 10:55:33.243295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.989 [2024-11-19 10:55:33.243308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.989 [2024-11-19 10:55:33.243315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.989 [2024-11-19 10:55:33.243322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.989 [2024-11-19 10:55:33.243337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.989 qpair failed and we were unable to recover it. 00:28:25.989 [2024-11-19 10:55:33.253262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.989 [2024-11-19 10:55:33.253315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.989 [2024-11-19 10:55:33.253329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.989 [2024-11-19 10:55:33.253336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.989 [2024-11-19 10:55:33.253342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.989 [2024-11-19 10:55:33.253357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.989 qpair failed and we were unable to recover it. 
00:28:25.989 [2024-11-19 10:55:33.263319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.989 [2024-11-19 10:55:33.263380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.989 [2024-11-19 10:55:33.263393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.989 [2024-11-19 10:55:33.263401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.989 [2024-11-19 10:55:33.263408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.989 [2024-11-19 10:55:33.263423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.989 qpair failed and we were unable to recover it. 00:28:25.989 [2024-11-19 10:55:33.273317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.989 [2024-11-19 10:55:33.273371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.989 [2024-11-19 10:55:33.273385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.989 [2024-11-19 10:55:33.273392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.989 [2024-11-19 10:55:33.273398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.989 [2024-11-19 10:55:33.273412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.989 qpair failed and we were unable to recover it. 00:28:25.989 [2024-11-19 10:55:33.283371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.989 [2024-11-19 10:55:33.283442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.989 [2024-11-19 10:55:33.283455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.989 [2024-11-19 10:55:33.283463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.989 [2024-11-19 10:55:33.283469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.989 [2024-11-19 10:55:33.283484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.989 qpair failed and we were unable to recover it. 
00:28:25.989 [2024-11-19 10:55:33.293371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.989 [2024-11-19 10:55:33.293426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.989 [2024-11-19 10:55:33.293439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.989 [2024-11-19 10:55:33.293445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.989 [2024-11-19 10:55:33.293452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.989 [2024-11-19 10:55:33.293467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.989 qpair failed and we were unable to recover it. 00:28:25.989 [2024-11-19 10:55:33.303445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.989 [2024-11-19 10:55:33.303499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.989 [2024-11-19 10:55:33.303512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.989 [2024-11-19 10:55:33.303519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.989 [2024-11-19 10:55:33.303526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.989 [2024-11-19 10:55:33.303541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.989 qpair failed and we were unable to recover it. 00:28:25.989 [2024-11-19 10:55:33.313419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.989 [2024-11-19 10:55:33.313477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.989 [2024-11-19 10:55:33.313492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.989 [2024-11-19 10:55:33.313498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.990 [2024-11-19 10:55:33.313506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.990 [2024-11-19 10:55:33.313522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.990 qpair failed and we were unable to recover it. 
00:28:25.990 [2024-11-19 10:55:33.323452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.990 [2024-11-19 10:55:33.323511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.990 [2024-11-19 10:55:33.323525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.990 [2024-11-19 10:55:33.323535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.990 [2024-11-19 10:55:33.323542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.990 [2024-11-19 10:55:33.323556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.990 qpair failed and we were unable to recover it. 00:28:25.990 [2024-11-19 10:55:33.333509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.990 [2024-11-19 10:55:33.333581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.990 [2024-11-19 10:55:33.333594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.990 [2024-11-19 10:55:33.333602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.990 [2024-11-19 10:55:33.333608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.990 [2024-11-19 10:55:33.333623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.990 qpair failed and we were unable to recover it. 00:28:25.990 [2024-11-19 10:55:33.343505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.990 [2024-11-19 10:55:33.343553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.990 [2024-11-19 10:55:33.343566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.990 [2024-11-19 10:55:33.343573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.990 [2024-11-19 10:55:33.343580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.990 [2024-11-19 10:55:33.343595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.990 qpair failed and we were unable to recover it. 
00:28:25.990 [2024-11-19 10:55:33.353565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.990 [2024-11-19 10:55:33.353618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.990 [2024-11-19 10:55:33.353631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.990 [2024-11-19 10:55:33.353638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.990 [2024-11-19 10:55:33.353644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.990 [2024-11-19 10:55:33.353659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.990 qpair failed and we were unable to recover it. 00:28:25.990 [2024-11-19 10:55:33.363606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.990 [2024-11-19 10:55:33.363671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.990 [2024-11-19 10:55:33.363684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.990 [2024-11-19 10:55:33.363692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.990 [2024-11-19 10:55:33.363699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.990 [2024-11-19 10:55:33.363716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.990 qpair failed and we were unable to recover it. 00:28:25.990 [2024-11-19 10:55:33.373598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.990 [2024-11-19 10:55:33.373648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.990 [2024-11-19 10:55:33.373661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.990 [2024-11-19 10:55:33.373668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.990 [2024-11-19 10:55:33.373675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.990 [2024-11-19 10:55:33.373690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.990 qpair failed and we were unable to recover it. 
00:28:25.990 [2024-11-19 10:55:33.383642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.990 [2024-11-19 10:55:33.383698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.990 [2024-11-19 10:55:33.383711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.990 [2024-11-19 10:55:33.383719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.990 [2024-11-19 10:55:33.383726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.990 [2024-11-19 10:55:33.383741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.990 qpair failed and we were unable to recover it. 00:28:25.990 [2024-11-19 10:55:33.393696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.990 [2024-11-19 10:55:33.393805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.990 [2024-11-19 10:55:33.393819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.990 [2024-11-19 10:55:33.393826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.990 [2024-11-19 10:55:33.393833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.990 [2024-11-19 10:55:33.393848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.990 qpair failed and we were unable to recover it. 00:28:25.990 [2024-11-19 10:55:33.403689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.990 [2024-11-19 10:55:33.403743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.990 [2024-11-19 10:55:33.403756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.990 [2024-11-19 10:55:33.403763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.990 [2024-11-19 10:55:33.403770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.990 [2024-11-19 10:55:33.403785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.990 qpair failed and we were unable to recover it. 
00:28:25.990 [2024-11-19 10:55:33.413642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.990 [2024-11-19 10:55:33.413703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.990 [2024-11-19 10:55:33.413717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.990 [2024-11-19 10:55:33.413724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.990 [2024-11-19 10:55:33.413730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.990 [2024-11-19 10:55:33.413744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.990 qpair failed and we were unable to recover it. 00:28:25.990 [2024-11-19 10:55:33.423768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.990 [2024-11-19 10:55:33.423824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.990 [2024-11-19 10:55:33.423837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.990 [2024-11-19 10:55:33.423845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.990 [2024-11-19 10:55:33.423851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.990 [2024-11-19 10:55:33.423866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.990 qpair failed and we were unable to recover it. 00:28:25.990 [2024-11-19 10:55:33.433768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:25.990 [2024-11-19 10:55:33.433820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:25.990 [2024-11-19 10:55:33.433833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:25.990 [2024-11-19 10:55:33.433839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:25.990 [2024-11-19 10:55:33.433846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:25.990 [2024-11-19 10:55:33.433861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:25.990 qpair failed and we were unable to recover it. 
00:28:26.251 [2024-11-19 10:55:33.443803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.251 [2024-11-19 10:55:33.443860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.251 [2024-11-19 10:55:33.443875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.251 [2024-11-19 10:55:33.443883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.251 [2024-11-19 10:55:33.443890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:26.251 [2024-11-19 10:55:33.443906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:26.251 qpair failed and we were unable to recover it. 00:28:26.251 [2024-11-19 10:55:33.453836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.251 [2024-11-19 10:55:33.453895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.251 [2024-11-19 10:55:33.453913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.251 [2024-11-19 10:55:33.453920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.251 [2024-11-19 10:55:33.453927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:26.251 [2024-11-19 10:55:33.453942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:26.251 qpair failed and we were unable to recover it. 00:28:26.251 [2024-11-19 10:55:33.463894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.252 [2024-11-19 10:55:33.463959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.252 [2024-11-19 10:55:33.463973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.252 [2024-11-19 10:55:33.463981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.252 [2024-11-19 10:55:33.463987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:26.252 [2024-11-19 10:55:33.464002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:26.252 qpair failed and we were unable to recover it. 
00:28:26.252 [2024-11-19 10:55:33.473879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.252 [2024-11-19 10:55:33.473934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.252 [2024-11-19 10:55:33.473952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.252 [2024-11-19 10:55:33.473959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.252 [2024-11-19 10:55:33.473966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:26.252 [2024-11-19 10:55:33.473981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:26.252 qpair failed and we were unable to recover it. 00:28:26.252 [2024-11-19 10:55:33.483902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.252 [2024-11-19 10:55:33.483965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.252 [2024-11-19 10:55:33.483979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.252 [2024-11-19 10:55:33.483987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.252 [2024-11-19 10:55:33.483993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:26.252 [2024-11-19 10:55:33.484008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:26.252 qpair failed and we were unable to recover it. 00:28:26.252 [2024-11-19 10:55:33.493964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.252 [2024-11-19 10:55:33.494038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.252 [2024-11-19 10:55:33.494053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.252 [2024-11-19 10:55:33.494061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.252 [2024-11-19 10:55:33.494068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:26.252 [2024-11-19 10:55:33.494088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:26.252 qpair failed and we were unable to recover it. 
00:28:26.252 [2024-11-19 10:55:33.503889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.252 [2024-11-19 10:55:33.503976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.252 [2024-11-19 10:55:33.503989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.252 [2024-11-19 10:55:33.503996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.252 [2024-11-19 10:55:33.504003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:26.252 [2024-11-19 10:55:33.504018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:26.252 qpair failed and we were unable to recover it. 00:28:26.252 [2024-11-19 10:55:33.513988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.252 [2024-11-19 10:55:33.514045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.252 [2024-11-19 10:55:33.514058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.252 [2024-11-19 10:55:33.514065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.252 [2024-11-19 10:55:33.514072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:26.252 [2024-11-19 10:55:33.514088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:26.252 qpair failed and we were unable to recover it. 00:28:26.252 [2024-11-19 10:55:33.524025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.252 [2024-11-19 10:55:33.524080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.252 [2024-11-19 10:55:33.524093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.252 [2024-11-19 10:55:33.524100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.252 [2024-11-19 10:55:33.524107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:26.252 [2024-11-19 10:55:33.524122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:26.252 qpair failed and we were unable to recover it. 
00:28:26.252 [2024-11-19 10:55:33.534058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.252 [2024-11-19 10:55:33.534122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.252 [2024-11-19 10:55:33.534136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.252 [2024-11-19 10:55:33.534144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.252 [2024-11-19 10:55:33.534150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:26.252 [2024-11-19 10:55:33.534166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:26.252 qpair failed and we were unable to recover it. 00:28:26.252 [2024-11-19 10:55:33.544089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.252 [2024-11-19 10:55:33.544159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.252 [2024-11-19 10:55:33.544181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.252 [2024-11-19 10:55:33.544188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.252 [2024-11-19 10:55:33.544195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:26.252 [2024-11-19 10:55:33.544214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:26.252 qpair failed and we were unable to recover it. 00:28:26.252 [2024-11-19 10:55:33.554125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.252 [2024-11-19 10:55:33.554177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.252 [2024-11-19 10:55:33.554191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.252 [2024-11-19 10:55:33.554198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.252 [2024-11-19 10:55:33.554205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:26.252 [2024-11-19 10:55:33.554221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:26.252 qpair failed and we were unable to recover it. 
00:28:26.252 [2024-11-19 10:55:33.564160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.252 [2024-11-19 10:55:33.564226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.252 [2024-11-19 10:55:33.564239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.252 [2024-11-19 10:55:33.564247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.252 [2024-11-19 10:55:33.564254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:26.252 [2024-11-19 10:55:33.564269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:26.252 qpair failed and we were unable to recover it. 00:28:26.252 [2024-11-19 10:55:33.574171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.252 [2024-11-19 10:55:33.574227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.252 [2024-11-19 10:55:33.574241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.252 [2024-11-19 10:55:33.574248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.252 [2024-11-19 10:55:33.574255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:26.252 [2024-11-19 10:55:33.574271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:26.252 qpair failed and we were unable to recover it. 00:28:26.252 [2024-11-19 10:55:33.584206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.252 [2024-11-19 10:55:33.584263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.252 [2024-11-19 10:55:33.584280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.252 [2024-11-19 10:55:33.584288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.252 [2024-11-19 10:55:33.584294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:26.252 [2024-11-19 10:55:33.584310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:26.252 qpair failed and we were unable to recover it. 
00:28:26.252 [2024-11-19 10:55:33.594230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.253 [2024-11-19 10:55:33.594292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.253 [2024-11-19 10:55:33.594306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.253 [2024-11-19 10:55:33.594314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.253 [2024-11-19 10:55:33.594321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:26.253 [2024-11-19 10:55:33.594336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:26.253 qpair failed and we were unable to recover it. 00:28:26.253 [2024-11-19 10:55:33.604245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.253 [2024-11-19 10:55:33.604309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.253 [2024-11-19 10:55:33.604323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.253 [2024-11-19 10:55:33.604330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.253 [2024-11-19 10:55:33.604337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:26.253 [2024-11-19 10:55:33.604352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:26.253 qpair failed and we were unable to recover it. 00:28:26.253 [2024-11-19 10:55:33.614279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.253 [2024-11-19 10:55:33.614335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.253 [2024-11-19 10:55:33.614348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.253 [2024-11-19 10:55:33.614355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.253 [2024-11-19 10:55:33.614362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:26.253 [2024-11-19 10:55:33.614378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:26.253 qpair failed and we were unable to recover it. 
00:28:26.253 [2024-11-19 10:55:33.624302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.253 [2024-11-19 10:55:33.624354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.253 [2024-11-19 10:55:33.624367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.253 [2024-11-19 10:55:33.624374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.253 [2024-11-19 10:55:33.624384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd8000b90 00:28:26.253 [2024-11-19 10:55:33.624399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:26.253 qpair failed and we were unable to recover it. 00:28:26.253 [2024-11-19 10:55:33.634352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.253 [2024-11-19 10:55:33.634456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.253 [2024-11-19 10:55:33.634513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.253 [2024-11-19 10:55:33.634539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.253 [2024-11-19 10:55:33.634561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd4000b90 00:28:26.253 [2024-11-19 10:55:33.634613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:26.253 qpair failed and we were unable to recover it. 00:28:26.253 [2024-11-19 10:55:33.644391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.253 [2024-11-19 10:55:33.644508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.253 [2024-11-19 10:55:33.644538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.253 [2024-11-19 10:55:33.644554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.253 [2024-11-19 10:55:33.644568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6dd4000b90 00:28:26.253 [2024-11-19 10:55:33.644599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:26.253 qpair failed and we were unable to recover it. 
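Note that by this point the failures have spread from qpair id 3 to qpair id 4 (the tqpair pointer changes as well). The "CQ transport error -6 (No such device or address)" entries come from completion polling: spdk_nvme_qpair_process_completions() returns a negative errno once the transport is down, and -6 is -ENXIO. A hedged sketch of that polling pattern, assuming an I/O qpair allocated earlier with spdk_nvme_ctrlr_alloc_io_qpair() (not the test's actual code):

```c
/* The polling pattern behind the nvme_qpair.c entries above: a negative
 * return from spdk_nvme_qpair_process_completions() is how the host sees
 * "CQ transport error -6 (No such device or address)". */
#include <errno.h>
#include "spdk/nvme.h"

static int poll_io_qpair(struct spdk_nvme_qpair *qpair)
{
	/* 0 == no limit on the number of completions reaped per call */
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

	if (rc == -ENXIO) {
		/* Transport-level failure: the qpair cannot make progress;
		 * recovery means reconnecting or resetting the controller. */
		return -ENXIO;
	}
	return rc < 0 ? (int)rc : 0;
}
```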
00:28:26.253 [2024-11-19 10:55:33.654401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.253 [2024-11-19 10:55:33.654497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.253 [2024-11-19 10:55:33.654554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.253 [2024-11-19 10:55:33.654581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.253 [2024-11-19 10:55:33.654604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6de0000b90
00:28:26.253 [2024-11-19 10:55:33.654656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:26.253 qpair failed and we were unable to recover it.
00:28:26.253 [2024-11-19 10:55:33.664455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.253 [2024-11-19 10:55:33.664569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.253 [2024-11-19 10:55:33.664601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.253 [2024-11-19 10:55:33.664617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.253 [2024-11-19 10:55:33.664630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6de0000b90
00:28:26.253 [2024-11-19 10:55:33.664662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:26.253 qpair failed and we were unable to recover it.
00:28:26.253 [2024-11-19 10:55:33.664836] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:28:26.253 A controller has encountered a failure and is being reset.
00:28:26.253 [2024-11-19 10:55:33.674448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.253 [2024-11-19 10:55:33.674550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.253 [2024-11-19 10:55:33.674612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.253 [2024-11-19 10:55:33.674638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.253 [2024-11-19 10:55:33.674661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22c8ba0
00:28:26.253 [2024-11-19 10:55:33.674713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.253 qpair failed and we were unable to recover it.
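The turning point is the failed Keep Alive above: keep-alives are submitted on the admin queue, so their failure marks the whole controller as failed, and the application responds by resetting it; the "Controller properly reset." line below confirms the reset succeeded. A sketch of that recovery sequence against SPDK's public host API (error handling abbreviated, and not the test's literal code):

```c
/* Sketch of the recovery logged above: a failed Keep Alive (admin queue)
 * marks the controller failed, after which the application resets it and
 * re-creates its I/O qpairs. */
#include "spdk/nvme.h"

static int recover_after_keep_alive_failure(struct spdk_nvme_ctrlr *ctrlr,
					    struct spdk_nvme_qpair **qpair)
{
	spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	if (!spdk_nvme_ctrlr_is_failed(ctrlr)) {
		return 0;                        /* nothing to recover */
	}
	spdk_nvme_ctrlr_free_io_qpair(*qpair);   /* old qpair is unusable */
	if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
		return -1;                       /* reset itself failed */
	}
	*qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
	return (*qpair != NULL) ? 0 : -1;
}
```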
00:28:26.253 [2024-11-19 10:55:33.684491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.253 [2024-11-19 10:55:33.684566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.253 [2024-11-19 10:55:33.684596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.253 [2024-11-19 10:55:33.684611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.253 [2024-11-19 10:55:33.684624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22c8ba0 00:28:26.253 [2024-11-19 10:55:33.684654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.253 qpair failed and we were unable to recover it. 00:28:26.513 Controller properly reset. 00:28:26.513 Initializing NVMe Controllers 00:28:26.513 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:26.513 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:26.513 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:28:26.513 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:28:26.513 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:28:26.513 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:28:26.513 Initialization complete. Launching workers. 00:28:26.513 Starting thread on core 1 00:28:26.513 Starting thread on core 2 00:28:26.513 Starting thread on core 3 00:28:26.513 Starting thread on core 0 00:28:26.513 10:55:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:28:26.513 00:28:26.513 real 0m10.867s 00:28:26.513 user 0m19.653s 00:28:26.513 sys 0m4.672s 00:28:26.513 10:55:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:26.513 10:55:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:26.513 ************************************ 00:28:26.513 END TEST nvmf_target_disconnect_tc2 00:28:26.513 ************************************ 00:28:26.513 10:55:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:28:26.513 10:55:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:28:26.513 10:55:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:28:26.513 10:55:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:26.513 10:55:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:28:26.513 10:55:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:26.513 10:55:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:28:26.513 10:55:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:26.513 10:55:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:28:26.513 rmmod nvme_tcp 00:28:26.513 rmmod nvme_fabrics 00:28:26.513 rmmod nvme_keyring 00:28:26.513 10:55:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:26.513 10:55:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:28:26.513 10:55:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:28:26.513 10:55:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 1851646 ']' 00:28:26.513 10:55:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 1851646 00:28:26.513 10:55:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1851646 ']' 00:28:26.513 10:55:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 1851646 00:28:26.513 10:55:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:28:26.513 10:55:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:26.513 10:55:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1851646 00:28:26.773 10:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:28:26.773 10:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:28:26.773 10:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1851646' 00:28:26.773 killing process with pid 1851646 00:28:26.773 10:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 1851646 00:28:26.773 10:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 1851646 00:28:26.773 10:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:26.773 10:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:26.773 10:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:26.773 10:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:28:26.773 10:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:28:26.773 10:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:26.773 10:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:28:26.773 10:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:26.773 10:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:26.773 10:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:26.773 10:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:26.773 10:55:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:29.311 10:55:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:29.311 00:28:29.311 real 0m19.647s 00:28:29.311 user 0m47.631s 00:28:29.311 sys 0m9.567s 00:28:29.311 10:55:36 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:29.311 10:55:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:29.311 ************************************ 00:28:29.311 END TEST nvmf_target_disconnect 00:28:29.311 ************************************ 00:28:29.311 10:55:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:28:29.311 00:28:29.311 real 5m50.839s 00:28:29.311 user 10m31.839s 00:28:29.311 sys 1m57.859s 00:28:29.311 10:55:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:29.311 10:55:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.311 ************************************ 00:28:29.311 END TEST nvmf_host 00:28:29.311 ************************************ 00:28:29.311 10:55:36 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:28:29.311 10:55:36 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:28:29.311 10:55:36 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:29.311 10:55:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:29.311 10:55:36 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:29.311 10:55:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:29.311 ************************************ 00:28:29.311 START TEST nvmf_target_core_interrupt_mode 00:28:29.311 ************************************ 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:29.311 * Looking for test storage... 
00:28:29.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:29.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:29.311 --rc genhtml_branch_coverage=1 00:28:29.311 --rc genhtml_function_coverage=1 00:28:29.311 --rc genhtml_legend=1 00:28:29.311 --rc geninfo_all_blocks=1 00:28:29.311 --rc geninfo_unexecuted_blocks=1 00:28:29.311 00:28:29.311 ' 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:29.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:29.311 --rc genhtml_branch_coverage=1 00:28:29.311 --rc genhtml_function_coverage=1 00:28:29.311 --rc genhtml_legend=1 00:28:29.311 --rc geninfo_all_blocks=1 00:28:29.311 --rc geninfo_unexecuted_blocks=1 00:28:29.311 00:28:29.311 ' 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:29.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:29.311 --rc genhtml_branch_coverage=1 00:28:29.311 --rc genhtml_function_coverage=1 00:28:29.311 --rc genhtml_legend=1 00:28:29.311 --rc geninfo_all_blocks=1 00:28:29.311 --rc geninfo_unexecuted_blocks=1 00:28:29.311 00:28:29.311 ' 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:29.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:29.311 --rc genhtml_branch_coverage=1 00:28:29.311 --rc genhtml_function_coverage=1 00:28:29.311 --rc genhtml_legend=1 00:28:29.311 --rc geninfo_all_blocks=1 00:28:29.311 --rc geninfo_unexecuted_blocks=1 00:28:29.311 00:28:29.311 ' 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:29.311 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:29.312 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:29.312 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:29.312 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:29.312 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:29.312 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:29.312 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:29.312 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:29.312 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:29.312 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:29.312 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:28:29.312 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:29.312 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:29.312 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:29.312 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.312 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.312 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.312 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:28:29.312 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.312 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:28:29.312 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:29.312 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:29.312 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:29.312 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:29.312 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:29.312 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:29.312 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:29.312 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:29.312 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:29.312 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:29.312 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:29.312 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:28:29.312 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:28:29.312 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:29.312 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:29.312 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:29.312 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:29.312 ************************************ 00:28:29.312 START TEST nvmf_abort 00:28:29.312 ************************************ 00:28:29.312 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:29.312 * Looking for test storage... 00:28:29.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:29.312 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:29.312 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:28:29.312 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:29.572 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:29.572 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:29.572 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:29.572 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:29.572 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:28:29.572 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:28:29.572 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:28:29.572 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:28:29.572 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:28:29.572 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:28:29.572 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:28:29.572 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:29.572 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:28:29.572 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:28:29.572 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:29.572 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:29.572 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:29.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:29.573 --rc genhtml_branch_coverage=1 00:28:29.573 --rc genhtml_function_coverage=1 00:28:29.573 --rc genhtml_legend=1 00:28:29.573 --rc geninfo_all_blocks=1 00:28:29.573 --rc geninfo_unexecuted_blocks=1 00:28:29.573 00:28:29.573 ' 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:29.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:29.573 --rc genhtml_branch_coverage=1 00:28:29.573 --rc genhtml_function_coverage=1 00:28:29.573 --rc genhtml_legend=1 00:28:29.573 --rc geninfo_all_blocks=1 00:28:29.573 --rc geninfo_unexecuted_blocks=1 00:28:29.573 00:28:29.573 ' 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:29.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:29.573 --rc genhtml_branch_coverage=1 00:28:29.573 --rc genhtml_function_coverage=1 00:28:29.573 --rc genhtml_legend=1 00:28:29.573 --rc geninfo_all_blocks=1 00:28:29.573 --rc geninfo_unexecuted_blocks=1 00:28:29.573 00:28:29.573 ' 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:29.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:29.573 --rc genhtml_branch_coverage=1 00:28:29.573 --rc genhtml_function_coverage=1 00:28:29.573 --rc genhtml_legend=1 00:28:29.573 --rc geninfo_all_blocks=1 00:28:29.573 --rc geninfo_unexecuted_blocks=1 00:28:29.573 00:28:29.573 ' 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:29.573 10:55:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:28:29.573 10:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:36.146 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:36.146 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:28:36.146 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:36.146 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:36.146 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:36.146 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:36.146 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:36.146 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:28:36.146 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:36.146 10:55:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:28:36.146 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:28:36.146 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:28:36.146 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:28:36.146 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:28:36.146 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:28:36.146 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:36.146 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:36.146 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:36.146 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:36.146 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:36.146 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:36.146 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:36.146 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:36.146 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:36.146 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:36.146 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:36.146 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:36.146 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:36.146 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:36.146 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:36.146 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:36.146 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:36.146 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:36.146 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:36.146 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:36.146 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:36.146 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
00:28:36.146 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:36.146 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:36.147 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:36.147 Found net devices under 0000:86:00.0: cvl_0_0 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:36.147 Found net devices under 0000:86:00.1: cvl_0_1 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:36.147 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:36.147 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.334 ms 00:28:36.147 00:28:36.147 --- 10.0.0.2 ping statistics --- 00:28:36.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.147 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:36.147 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:36.147 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:28:36.147 00:28:36.147 --- 10.0.0.1 ping statistics --- 00:28:36.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.147 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=1856177 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1856177 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1856177 ']' 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:36.147 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:36.148 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:36.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:36.148 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:36.148 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:36.148 [2024-11-19 10:55:42.735507] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:36.148 [2024-11-19 10:55:42.736454] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:28:36.148 [2024-11-19 10:55:42.736492] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:36.148 [2024-11-19 10:55:42.817760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:36.148 [2024-11-19 10:55:42.859830] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:36.148 [2024-11-19 10:55:42.859867] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:36.148 [2024-11-19 10:55:42.859874] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:36.148 [2024-11-19 10:55:42.859880] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:36.148 [2024-11-19 10:55:42.859885] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:36.148 [2024-11-19 10:55:42.861326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:36.148 [2024-11-19 10:55:42.861432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:36.148 [2024-11-19 10:55:42.861434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:36.148 [2024-11-19 10:55:42.929794] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:36.148 [2024-11-19 10:55:42.930635] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:36.148 [2024-11-19 10:55:42.930844] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
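Note: the nvmf_tcp_init and nvmfappstart traces above capture the whole bring-up: one port of the NIC pair (cvl_0_0) is moved into a private network namespace, both ends are numbered, the NVMe/TCP port is opened in iptables, reachability is verified with ping in both directions, and nvmf_tgt is then launched inside the namespace with --interrupt-mode. Condensed into a sketch, with interface names, addresses, and the core mask taken from this run (the real helpers live in test/nvmf/common.sh):

    # Two-interface NVMe/TCP test topology, as set up by nvmf_tcp_init above.
    NS=cvl_0_0_ns_spdk                       # target-side network namespace
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"          # target port enters the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # Open the listener port; the harness additionally tags the rule with an
    # SPDK_NVMF comment so nvmftestfini can strip it via iptables-save/restore.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                       # root ns -> namespaced target
    ip netns exec "$NS" ping -c 1 10.0.0.1   # namespaced target -> root ns
    # Launch the target in interrupt mode, as traced above; waitforlisten
    # then blocks until /var/tmp/spdk.sock accepts RPCs.
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
    nvmfpid=$!

With -m 0xE the reactors come up on cores 1-3 only, and the thread.c notices that follow confirm each poll-group thread was switched out of busy polling into interrupt mode.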
00:28:36.148 [2024-11-19 10:55:42.930995] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:36.148 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:36.148 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:28:36.148 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:36.148 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:36.148 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:36.148 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:36.148 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:28:36.148 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.148 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:36.148 [2024-11-19 10:55:42.998326] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:36.148 10:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.148 10:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:28:36.148 10:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.148 10:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:36.148 Malloc0 00:28:36.148 10:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.148 10:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:36.148 10:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.148 10:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:36.148 Delay0 00:28:36.148 10:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.148 10:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:36.148 10:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.148 10:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:36.148 10:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.148 10:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:28:36.148 10:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:36.148 10:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:36.148 10:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.148 10:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:36.148 10:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.148 10:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:36.148 [2024-11-19 10:55:43.094219] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:36.148 10:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.148 10:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:36.148 10:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.148 10:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:36.148 10:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.148 10:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:28:36.148 [2024-11-19 10:55:43.225603] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:28:38.055 Initializing NVMe Controllers 00:28:38.055 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:28:38.055 controller IO queue size 128 less than required 00:28:38.055 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:28:38.055 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:28:38.055 Initialization complete. Launching workers. 
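For readers following the trace, the rpc_cmd calls above reduce to the following rpc.py sequence (a condensed sketch; rpc.py stands for the full scripts/rpc.py path shown in the trace, and a running nvmf_tgt on the default RPC socket is assumed):

    # Build the abort-test target: TCP transport, a deliberately slow delay bdev, one subsystem
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    rpc.py bdev_malloc_create 64 4096 -b Malloc0
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # The abort example then queues reads against the slow namespace and aborts them:
    ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128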
00:28:38.055 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 36971 00:28:38.055 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37032, failed to submit 66 00:28:38.056 success 36971, unsuccessful 61, failed 0 00:28:38.056 10:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:38.056 10:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.056 10:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:38.056 10:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.056 10:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:28:38.056 10:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:28:38.056 10:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:38.056 10:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:28:38.056 10:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:38.056 10:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:28:38.056 10:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:38.056 10:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:38.056 rmmod nvme_tcp 00:28:38.056 rmmod nvme_fabrics 00:28:38.056 rmmod nvme_keyring 00:28:38.056 10:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:38.056 10:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:28:38.056 10:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:28:38.056 10:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1856177 ']' 00:28:38.056 10:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1856177 00:28:38.056 10:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1856177 ']' 00:28:38.056 10:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1856177 00:28:38.056 10:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:28:38.056 10:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:38.056 10:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1856177 00:28:38.056 10:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:38.056 10:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:38.056 10:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1856177' 00:28:38.056 killing process with pid 1856177 
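The abort counters reported above are internally consistent, which is worth checking when reading these runs: every submitted abort is accounted as either success or unsuccessful, and the abort attempts add up to the number of issued I/Os, suggesting one abort attempt per I/O. A quick shell check (numbers copied from this run):

    echo $((36971 + 61))   # success + unsuccessful aborts -> 37032, the "abort submitted" count
    echo $((37032 + 66))   # submitted + failed-to-submit  -> 37098 abort attempts in total
    echo $((127 + 36971))  # completed + failed I/O        -> 37098, matching the abort-attempt total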
00:28:38.056 10:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1856177 00:28:38.056 10:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1856177 00:28:38.315 10:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:38.315 10:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:38.315 10:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:38.315 10:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:28:38.315 10:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:28:38.315 10:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:38.315 10:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:28:38.315 10:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:38.315 10:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:38.315 10:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:38.315 10:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:38.315 10:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:40.857 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:40.857 00:28:40.857 real 0m11.059s 00:28:40.857 user 0m10.218s 00:28:40.857 sys 0m5.764s 00:28:40.857 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:40.857 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:40.857 ************************************ 00:28:40.857 END TEST nvmf_abort 00:28:40.857 ************************************ 00:28:40.857 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:40.857 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:40.857 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:40.857 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:40.857 ************************************ 00:28:40.857 START TEST nvmf_ns_hotplug_stress 00:28:40.857 ************************************ 00:28:40.857 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:40.857 * Looking for test storage... 
00:28:40.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:40.857 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:40.857 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:28:40.857 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:40.857 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:40.857 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:40.857 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:40.857 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:40.857 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:28:40.857 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:28:40.857 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:28:40.857 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:28:40.857 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:28:40.857 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:28:40.857 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:28:40.857 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:40.857 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:28:40.857 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:28:40.857 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:40.857 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:40.857 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:28:40.857 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:28:40.857 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:40.857 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:28:40.857 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:28:40.857 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:28:40.857 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:28:40.857 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:40.857 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:28:40.857 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:28:40.857 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:40.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.858 --rc genhtml_branch_coverage=1 00:28:40.858 --rc genhtml_function_coverage=1 00:28:40.858 --rc genhtml_legend=1 00:28:40.858 --rc geninfo_all_blocks=1 00:28:40.858 --rc geninfo_unexecuted_blocks=1 00:28:40.858 00:28:40.858 ' 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:40.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.858 --rc genhtml_branch_coverage=1 00:28:40.858 --rc genhtml_function_coverage=1 00:28:40.858 --rc genhtml_legend=1 00:28:40.858 --rc geninfo_all_blocks=1 00:28:40.858 --rc geninfo_unexecuted_blocks=1 00:28:40.858 00:28:40.858 ' 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:40.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.858 --rc genhtml_branch_coverage=1 00:28:40.858 --rc genhtml_function_coverage=1 00:28:40.858 --rc genhtml_legend=1 00:28:40.858 --rc geninfo_all_blocks=1 00:28:40.858 --rc geninfo_unexecuted_blocks=1 00:28:40.858 00:28:40.858 ' 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:40.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.858 --rc genhtml_branch_coverage=1 00:28:40.858 --rc genhtml_function_coverage=1 
00:28:40.858 --rc genhtml_legend=1 00:28:40.858 --rc geninfo_all_blocks=1 00:28:40.858 --rc geninfo_unexecuted_blocks=1 00:28:40.858 00:28:40.858 ' 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:28:40.858 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:46.139 10:55:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:46.139 10:55:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:46.139 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:46.139 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:46.139 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:46.140 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:46.140 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:46.140 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:46.140 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:46.140 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:46.140 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:46.140 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:46.140 
10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:46.140 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:46.140 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:46.140 Found net devices under 0000:86:00.0: cvl_0_0 00:28:46.140 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:46.140 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:46.140 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:46.140 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:46.140 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:46.140 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:46.140 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:46.140 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:46.140 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:46.140 Found net devices under 0000:86:00.1: cvl_0_1 00:28:46.140 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:46.140 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:46.140 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:28:46.140 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:46.140 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:46.140 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:46.140 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:46.140 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:46.140 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:46.140 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:46.140 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:46.140 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:46.140 10:55:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:46.140 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:46.140 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:46.140 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:46.140 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:46.140 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:46.140 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:46.140 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:46.140 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:46.401 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:46.401 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:46.401 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:46.401 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:46.401 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:46.401 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:46.401 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:46.401 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:46.401 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:46.401 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.471 ms 00:28:46.401 00:28:46.401 --- 10.0.0.2 ping statistics --- 00:28:46.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.401 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:28:46.401 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:46.401 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:46.401 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:28:46.401 00:28:46.401 --- 10.0.0.1 ping statistics --- 00:28:46.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.401 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:28:46.401 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:46.401 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:28:46.401 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:46.401 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:46.401 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:46.401 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:46.401 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:46.401 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:46.401 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:46.401 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:28:46.401 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:46.401 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:46.401 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:46.401 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1860168 00:28:46.401 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1860168 00:28:46.401 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:46.401 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1860168 ']' 00:28:46.401 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:46.401 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:46.401 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:46.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
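The ping exchanges above verify the namespace wiring that nvmf_tcp_init set up a few lines earlier. Condensed into plain commands, the topology looks like this (interface names cvl_0_0/cvl_0_1 are this host's E810 ports, as discovered above):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address on the host side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address in the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                                   # host -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> host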
00:28:46.401 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:46.401 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:46.661 [2024-11-19 10:55:53.875769] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:46.661 [2024-11-19 10:55:53.876741] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:28:46.661 [2024-11-19 10:55:53.876777] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:46.661 [2024-11-19 10:55:53.954482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:46.661 [2024-11-19 10:55:53.996415] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:46.661 [2024-11-19 10:55:53.996449] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:46.661 [2024-11-19 10:55:53.996456] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:46.661 [2024-11-19 10:55:53.996462] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:46.661 [2024-11-19 10:55:53.996468] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:46.661 [2024-11-19 10:55:53.997954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:46.661 [2024-11-19 10:55:53.997840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:46.661 [2024-11-19 10:55:53.997966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:46.661 [2024-11-19 10:55:54.066332] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:46.661 [2024-11-19 10:55:54.067202] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:46.661 [2024-11-19 10:55:54.067493] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:46.661 [2024-11-19 10:55:54.067634] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
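The target above was started with -m 0xE and --interrupt-mode, and the three "Reactor started on core N" notices follow directly from that mask: 0xE is binary 1110, i.e. bits 1-3 set. A one-liner to decode any such mask (a small illustration, not part of the test scripts):

    mask=0xE
    for i in {0..7}; do
        (( (mask >> i) & 1 )) && echo "reactor on core $i"
    done
    # prints cores 1, 2 and 3 -- matching the notices in the trace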
00:28:46.661 10:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:46.661 10:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:28:46.661 10:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:46.661 10:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:46.661 10:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:46.921 10:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:46.921 10:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:28:46.921 10:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:46.921 [2024-11-19 10:55:54.302685] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:46.921 10:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:47.180 10:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:47.439 [2024-11-19 10:55:54.718999] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:47.439 10:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:47.698 10:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:28:47.698 Malloc0 00:28:47.957 10:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:47.957 Delay0 00:28:47.957 10:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:48.216 10:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:28:48.475 NULL1 00:28:48.475 10:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
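At this point ns_hotplug_stress.sh has built its target: one subsystem capped at 10 namespaces, a delayed malloc bdev, and a resizable null bdev. The equivalent rpc.py sequence, condensed from the trace above (rpc.py again abbreviates the full scripts/rpc.py path):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_malloc_create 32 512 -b Malloc0
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    rpc.py bdev_null_create NULL1 1000 512        # 1000 MB null bdev, 512-byte blocks
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1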
00:28:48.734 10:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1860447 00:28:48.734 10:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:28:48.734 10:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1860447 00:28:48.734 10:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:49.672 Read completed with error (sct=0, sc=11) 00:28:49.672 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:49.672 10:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:49.931 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:49.931 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:49.931 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:49.931 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:49.931 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:49.931 10:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:28:49.931 10:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:28:50.190 true 00:28:50.190 10:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1860447 00:28:50.190 10:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:51.125 10:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:51.125 10:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:28:51.125 10:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:28:51.390 true 00:28:51.390 10:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1860447 00:28:51.390 10:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:51.649 10:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
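The repeating pattern that fills the rest of this test is the hotplug loop: while spdk_nvme_perf (PID 1860447 here) is still alive, the script removes namespace 1, re-adds Delay0, bumps null_size and resizes NULL1. A sketch of one iteration (loop structure inferred from the trace; kill -0 sends no signal, it only tests that the PID exists, and $PERF_PID/$null_size mirror the script's own variables):

    while kill -0 "$PERF_PID" 2>/dev/null; do      # perf workload still running?
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        (( null_size++ ))                          # 1000 -> 1001 -> 1002 ...
        rpc.py bdev_null_resize NULL1 "$null_size"
    done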
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:51.908 10:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:28:51.908 10:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:28:51.908 true 00:28:52.166 10:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1860447 00:28:52.166 10:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:53.101 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:53.101 10:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:53.101 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:53.101 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:53.101 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:53.101 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:53.360 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:53.360 10:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:28:53.360 10:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:28:53.619 true 00:28:53.619 10:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1860447 00:28:53.619 10:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:54.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:54.445 10:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:54.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:54.445 10:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:28:54.445 10:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:28:54.704 true 00:28:54.704 10:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1860447 00:28:54.704 10:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:28:54.963 10:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:55.222 10:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:28:55.222 10:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:28:55.222 true 00:28:55.222 10:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1860447 00:28:55.222 10:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:56.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:56.599 10:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:56.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:56.599 10:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:28:56.599 10:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:28:56.857 true 00:28:56.857 10:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1860447 00:28:56.857 10:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:57.116 10:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:57.376 10:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:28:57.376 10:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:28:57.376 true 00:28:57.376 10:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1860447 00:28:57.376 10:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:58.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:58.819 10:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:28:58.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:58.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:58.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:58.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:58.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:58.819 10:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:28:58.819 10:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:28:59.078 true 00:28:59.078 10:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1860447 00:28:59.078 10:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:00.016 10:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:00.016 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:00.016 10:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:29:00.016 10:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:29:00.275 true 00:29:00.275 10:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1860447 00:29:00.275 10:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:00.534 10:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:00.534 10:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:29:00.534 10:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:29:00.792 true 00:29:00.792 10:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1860447 00:29:00.792 10:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:02.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:02.170 10:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:02.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:02.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:02.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:02.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:02.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:02.170 10:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:29:02.170 10:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:29:02.428 true 00:29:02.428 10:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1860447 00:29:02.428 10:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:03.364 10:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:03.364 10:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:29:03.364 10:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:29:03.623 true 00:29:03.623 10:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1860447 00:29:03.623 10:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:03.882 10:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:03.882 10:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:29:03.882 10:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:29:04.141 true 00:29:04.141 10:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1860447 00:29:04.141 10:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:05.077 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:05.335 10:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:29:05.335 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:05.335 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:05.335 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:05.335 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:05.335 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:05.335 10:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:29:05.335 10:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:29:05.593 true 00:29:05.593 10:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1860447 00:29:05.593 10:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:06.530 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:06.530 10:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:06.530 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:06.789 10:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:29:06.789 10:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:29:06.789 true 00:29:06.789 10:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1860447 00:29:06.789 10:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:07.047 10:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:07.306 10:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:29:07.306 10:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:29:07.306 true 00:29:07.564 10:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1860447 00:29:07.564 10:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:08.499 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:08.499 10:56:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:08.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:08.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:08.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:08.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:08.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:08.757 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:29:08.757 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:29:09.014 true 00:29:09.014 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1860447 00:29:09.014 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:09.949 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:09.949 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:29:09.949 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:29:10.208 true 00:29:10.208 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1860447 00:29:10.208 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:10.466 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:10.725 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:29:10.725 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:29:10.725 true 00:29:10.983 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1860447 00:29:10.983 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:11.923 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:11.923 10:56:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:11.923 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:11.923 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:11.923 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:12.184 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:12.184 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:12.184 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:12.184 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:29:12.184 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:29:12.443 true 00:29:12.443 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1860447 00:29:12.443 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:13.378 10:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:13.378 10:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:29:13.378 10:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:29:13.636 true 00:29:13.636 10:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1860447 00:29:13.636 10:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:13.894 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:13.895 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:29:13.895 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:29:14.154 true 00:29:14.154 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1860447 00:29:14.154 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:15.090 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:29:15.090 10:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:15.349 10:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:29:15.349 10:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:29:15.608 true 00:29:15.608 10:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1860447 00:29:15.608 10:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:15.867 10:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:16.126 10:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:29:16.126 10:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:29:16.126 true 00:29:16.126 10:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1860447 00:29:16.126 10:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:17.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:17.503 10:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:17.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:17.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:17.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:17.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:17.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:17.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:17.503 10:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:29:17.503 10:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:29:17.762 true 00:29:17.762 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1860447 00:29:17.762 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:18.697 10:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:18.697 10:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:29:18.697 10:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:29:18.955 Initializing NVMe Controllers
00:29:18.955 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:18.955 Controller IO queue size 128, less than required.
00:29:18.955 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:18.955 Controller IO queue size 128, less than required.
00:29:18.955 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:18.955 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:18.955 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:29:18.955 Initialization complete. Launching workers.
00:29:18.955 ========================================================
00:29:18.955 Latency(us)
00:29:18.955 Device Information : IOPS MiB/s Average min max
00:29:18.955 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2029.13 0.99 43623.14 2770.71 1035398.81
00:29:18.955 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17853.76 8.72 7169.39 1603.80 308728.59
00:29:18.955 ========================================================
00:29:18.955 Total : 19882.89 9.71 10889.65 1603.80 1035398.81
00:29:18.955
00:29:18.955 true
00:29:18.955 10:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1860447
00:29:18.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1860447) - No such process
00:29:18.955 10:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1860447
00:29:18.955 10:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:19.213 10:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:29:19.472 10:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:29:19.472 10:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:29:19.472 10:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:29:19.472 10:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:19.472 10:56:26
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:29:19.472 null0 00:29:19.472 10:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:19.472 10:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:19.472 10:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:29:19.731 null1 00:29:19.731 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:19.731 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:19.731 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:29:19.989 null2 00:29:19.989 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:19.989 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:19.990 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:29:19.990 null3 00:29:20.248 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:20.248 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:20.248 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:29:20.248 null4 00:29:20.248 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:20.248 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:20.248 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:29:20.507 null5 00:29:20.507 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:20.507 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:20.507 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:29:20.766 null6 00:29:20.766 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:20.766 10:56:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:20.766 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:29:20.766 null7 00:29:20.766 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:20.766 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:20.766 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:29:20.766 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:20.766 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:20.766 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:20.766 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:29:20.766 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:20.766 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:29:20.766 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:20.766 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.766 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:20.766 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:20.766 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:20.766 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:29:20.766 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:20.766 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:29:20.766 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:20.766 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:29:20.766 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.766 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
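The interleaved sh@14-sh@18 markers through this stretch are eight concurrent copies of the script's add_remove helper; going by the trace it looks roughly like this (a sketch reconstructed from the xtrace output, not the verbatim script; $rpc_py again stands in for the scripts/rpc.py invocation):

    # One worker: repeatedly attach and detach a namespace backed by the given null bdev.
    add_remove() {
        local nsid=$1 bdev=$2                                                            # sh@14
        for ((i = 0; i < 10; i++)); do                                                   # sh@16
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # sh@17
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # sh@18
        done
    }

Because all eight workers target the same cnode1 subsystem concurrently, their per-namespace add/remove pairs interleave freely in the output that follows.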
00:29:20.766 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:29:20.767 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:20.767 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.767 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:20.767 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:20.767 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:20.767 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:20.767 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:20.767 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:20.767 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:29:20.767 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:20.767 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:29:20.767 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:20.767 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.767 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:20.767 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:20.767 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:20.767 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:29:20.767 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:20.767 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:29:20.767 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:20.767 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.767 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:20.767 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:20.767 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:20.767 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:20.767 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:29:20.767 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:29:20.767 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:20.767 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:20.767 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:20.767 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:20.767 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.767 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:29:20.767 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:21.026 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:29:21.026 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
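Stepping back one level, the sh@58-sh@66 markers (beginning in the output above and ending at the wait that appears just below) belong to the driver for this phase. Reconstructed from the trace, with the same hedges as the sketches above, it is approximately:

    nthreads=8                                # sh@58
    pids=()                                   # sh@58
    for ((i = 0; i < nthreads; i++)); do      # sh@59
        # sh@60: create null0..null7 (size 100, block size 4096, per the trace arguments)
        $rpc_py bdev_null_create "null$i" 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do      # sh@62
        add_remove $((i + 1)) "null$i" &      # sh@63: one backgrounded worker per namespace
        pids+=($!)                            # sh@64
    done
    wait "${pids[@]}"                         # sh@66

The wait at sh@66 lists the eight PIDs 1865763 through 1865775, matching the backgrounded add_remove workers launched here.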
00:29:21.026 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:29:21.026 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:21.026 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:21.026 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:21.026 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:29:21.026 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.026 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:21.026 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1865763 1865764 1865766 1865769 1865770 1865772 1865774 1865775 00:29:21.026 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.026 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:21.026 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:21.026 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:21.026 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:21.026 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:21.026 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:21.026 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:21.026 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:21.026 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:21.026 10:56:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:21.284 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.284 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.284 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:21.284 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.284 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.284 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:21.284 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.284 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.284 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:21.284 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.284 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.284 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:21.284 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.284 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.284 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:21.284 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.284 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.284 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:21.284 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.284 10:56:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.284 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:21.284 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.284 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.284 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:21.542 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:21.542 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:21.542 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:21.542 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:21.542 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:21.542 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:21.542 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:21.542 10:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:21.866 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.866 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.867 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:21.867 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.867 10:56:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.867 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:21.867 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.867 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.867 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.867 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.867 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:21.867 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:21.867 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.867 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.867 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:21.867 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.867 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.867 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:21.867 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.867 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.867 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:21.867 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.867 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.867 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:21.867 10:56:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:21.867 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:21.867 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:21.867 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:21.867 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:21.867 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:21.867 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:21.867 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:22.127 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.127 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.127 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:22.127 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.127 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.127 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:22.127 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.127 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.127 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:22.127 10:56:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.127 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.127 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.127 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.127 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:22.127 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:22.127 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.127 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.127 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:22.127 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.127 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.127 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:22.127 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.127 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.127 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:22.385 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:22.385 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:22.385 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:22.385 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:29:22.385 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:22.385 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:22.385 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:22.385 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:22.643 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.643 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.643 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:22.643 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.643 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.643 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:22.643 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.643 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.643 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:22.643 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.643 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.643 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.643 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:22.643 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.643 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:22.643 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.643 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.643 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:22.643 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.643 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.643 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:22.643 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.643 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.643 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:22.901 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:22.901 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:22.901 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:22.901 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:22.901 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:22.901 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:22.901 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:22.901 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:22.901 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.901 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.901 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:22.901 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.901 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.901 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.901 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:22.901 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.901 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:22.901 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.901 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.901 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.901 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.901 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:22.901 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:22.901 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.901 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.901 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:22.901 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.901 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.901 10:56:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:23.160 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.160 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.160 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:23.160 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:23.160 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:23.160 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:23.160 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:23.160 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:23.160 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:23.160 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:23.160 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:23.419 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.419 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.419 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.419 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.419 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:23.419 10:56:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:23.419 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.419 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.419 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:23.419 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.419 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.419 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:23.419 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.419 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.419 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:23.419 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.419 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.419 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:23.419 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.419 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.419 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:23.419 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.419 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.420 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:23.679 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:23.679 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:23.679 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:23.679 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:23.679 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:23.679 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:23.679 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:23.679 10:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:23.937 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.937 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.937 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:23.937 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.937 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.937 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:23.937 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.937 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.938 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:23.938 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.938 10:56:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.938 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:23.938 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.938 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.938 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:23.938 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.938 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.938 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:23.938 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.938 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.938 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:23.938 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.938 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.938 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:23.938 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:23.938 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:23.938 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:23.938 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:23.938 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:23.938 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:24.197 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:24.197 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:24.197 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.197 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.197 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:24.197 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.197 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.197 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:24.197 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.197 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.197 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:24.197 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.197 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.197 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:24.197 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.197 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.197 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:24.197 10:56:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.197 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.197 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.197 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.197 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:24.197 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:24.197 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.197 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.197 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:24.455 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:24.455 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:24.455 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:24.455 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:24.455 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:24.456 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:24.456 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:24.456 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:24.715 10:56:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.715 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.715 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:24.715 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.715 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.715 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:24.715 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.715 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.715 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:24.715 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.715 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.715 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:24.715 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.715 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.715 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.715 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.715 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:24.715 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:24.715 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.715 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.715 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:24.715 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.715 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.715 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:24.974 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:24.974 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:24.974 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:24.974 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:24.974 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:24.974 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:24.974 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:24.974 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:24.974 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.974 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:25.234 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:25.234 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:25.234 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:25.234 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:25.234 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:29:25.234 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:25.234 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:25.234 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:25.234 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:25.234 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:25.234 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:25.234 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:25.234 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:25.234 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:25.234 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:29:25.234 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:29:25.234 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:25.234 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:29:25.234 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:25.234 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:29:25.234 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:25.234 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:25.234 rmmod nvme_tcp 00:29:25.234 rmmod nvme_fabrics 00:29:25.234 rmmod nvme_keyring 00:29:25.234 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:25.234 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:29:25.234 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:29:25.234 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1860168 ']' 00:29:25.234 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1860168 00:29:25.234 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1860168 ']' 00:29:25.234 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1860168 00:29:25.234 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:29:25.234 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 
-- # '[' Linux = Linux ']' 00:29:25.234 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1860168 00:29:25.234 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:25.234 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:25.234 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1860168' 00:29:25.234 killing process with pid 1860168 00:29:25.234 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1860168 00:29:25.234 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1860168 00:29:25.494 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:25.494 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:25.494 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:25.494 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:29:25.494 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:29:25.494 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:25.494 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:29:25.494 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:25.494 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:25.494 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:25.494 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:25.494 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:27.401 10:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:27.401 00:29:27.401 real 0m47.037s 00:29:27.401 user 2m55.853s 00:29:27.401 sys 0m19.760s 00:29:27.401 10:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:27.401 10:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:27.401 ************************************ 00:29:27.401 END TEST nvmf_ns_hotplug_stress 00:29:27.401 ************************************ 00:29:27.401 10:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:29:27.401 10:56:34 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:27.401 10:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:27.401 10:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:27.661 ************************************ 00:29:27.661 START TEST nvmf_delete_subsystem 00:29:27.661 ************************************ 00:29:27.661 10:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:29:27.661 * Looking for test storage... 00:29:27.661 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:27.661 10:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:27.661 10:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:29:27.661 10:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:27.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.661 --rc genhtml_branch_coverage=1 00:29:27.661 --rc genhtml_function_coverage=1 00:29:27.661 --rc genhtml_legend=1 00:29:27.661 --rc geninfo_all_blocks=1 00:29:27.661 --rc geninfo_unexecuted_blocks=1 00:29:27.661 00:29:27.661 ' 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:27.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.661 --rc genhtml_branch_coverage=1 00:29:27.661 --rc genhtml_function_coverage=1 00:29:27.661 --rc genhtml_legend=1 00:29:27.661 --rc geninfo_all_blocks=1 00:29:27.661 --rc geninfo_unexecuted_blocks=1 00:29:27.661 00:29:27.661 ' 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:27.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.661 --rc genhtml_branch_coverage=1 00:29:27.661 --rc genhtml_function_coverage=1 00:29:27.661 --rc genhtml_legend=1 00:29:27.661 --rc geninfo_all_blocks=1 00:29:27.661 --rc geninfo_unexecuted_blocks=1 00:29:27.661 00:29:27.661 ' 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:27.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.661 --rc genhtml_branch_coverage=1 00:29:27.661 --rc genhtml_function_coverage=1 00:29:27.661 --rc 
genhtml_legend=1 00:29:27.661 --rc geninfo_all_blocks=1 00:29:27.661 --rc geninfo_unexecuted_blocks=1 00:29:27.661 00:29:27.661 ' 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:27.661 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:27.662 10:56:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.662 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.662 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.662 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:29:27.662 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.662 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:29:27.662 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:27.662 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:27.662 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:27.662 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:27.662 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:27.662 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:27.662 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:27.662 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:27.662 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:27.662 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:27.662 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:29:27.662 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:27.662 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:27.662 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:27.662 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:27.662 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:27.662 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:27.662 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:27.662 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:27.662 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:27.662 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:27.662 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:29:27.662 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:34.233 10:56:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:34.233 10:56:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:34.233 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:34.233 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:34.233 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:34.233 10:56:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:34.233 Found net devices under 0000:86:00.0: cvl_0_0 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:34.234 Found net devices under 0000:86:00.1: cvl_0_1 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:34.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:34.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:29:34.234 00:29:34.234 --- 10.0.0.2 ping statistics --- 00:29:34.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:34.234 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:34.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:34.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:29:34.234 00:29:34.234 --- 10.0.0.1 ping statistics --- 00:29:34.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:34.234 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:34.234 10:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:34.234 10:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1870131 00:29:34.234 10:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1870131 00:29:34.234 10:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:29:34.234 10:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1870131 ']' 00:29:34.234 10:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:34.234 10:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:34.234 10:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:34.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
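The nvmf_tcp_init sequence above builds the whole test topology from two back-to-back-wired ports of the same NIC: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), the NVMe/TCP port is opened in iptables, and reachability is verified with one ping in each direction. A minimal standalone sketch of the same steps, assuming root privileges and two physically looped ports (the interface names below are taken from this run and will differ per machine):

#!/usr/bin/env bash
# Sketch of nvmf_tcp_init's topology; IF_TGT/IF_INI name two ports wired
# back to back (cvl_0_0/cvl_0_1 in this log). Run as root.
IF_TGT=cvl_0_0; IF_INI=cvl_0_1; NS=cvl_0_0_ns_spdk
ip -4 addr flush "$IF_TGT"
ip -4 addr flush "$IF_INI"
ip netns add "$NS"
ip link set "$IF_TGT" netns "$NS"                          # target port lives in the namespace
ip addr add 10.0.0.1/24 dev "$IF_INI"                      # initiator address, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$IF_TGT"  # target address
ip link set "$IF_INI" up
ip netns exec "$NS" ip link set "$IF_TGT" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$IF_INI" -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP traffic
ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1    # verify both directions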
00:29:34.234 10:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:34.234 10:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:34.234 [2024-11-19 10:56:41.051825] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:34.234 [2024-11-19 10:56:41.052763] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:29:34.234 [2024-11-19 10:56:41.052796] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:34.234 [2024-11-19 10:56:41.131576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:34.234 [2024-11-19 10:56:41.172956] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:34.234 [2024-11-19 10:56:41.172994] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:34.234 [2024-11-19 10:56:41.173002] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:34.234 [2024-11-19 10:56:41.173008] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:34.234 [2024-11-19 10:56:41.173013] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:34.234 [2024-11-19 10:56:41.174224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:34.234 [2024-11-19 10:56:41.174225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:34.234 [2024-11-19 10:56:41.240828] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:34.234 [2024-11-19 10:56:41.241402] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:34.234 [2024-11-19 10:56:41.241613] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
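nvmfappstart then launches the target inside that namespace with the flags echoed above: -i 0 picks the shared-memory id, -e 0xFFFF enables all tracepoint groups, -m 0x3 pins reactors to cores 0 and 1, and --interrupt-mode is the feature this whole suite exercises; the NOTICE lines confirm both reactors and every spdk_thread came up in interrupt rather than polled mode. A hedged launch-and-wait equivalent (the SPDK tree location is the one from this log, and the readiness probe is a simplified stand-in for the harness's waitforlisten helper):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # adjust to your checkout
NS=cvl_0_0_ns_spdk
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
nvmfpid=$!
# poll the default RPC socket (/var/tmp/spdk.sock) until the app answers
until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited early' >&2; exit 1; }
    sleep 0.2
done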
00:29:34.234 10:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:34.234 10:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:29:34.234 10:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:34.234 10:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:34.234 10:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:34.234 10:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:34.234 10:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:34.234 10:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.234 10:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:34.234 [2024-11-19 10:56:41.304305] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:34.235 10:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.235 10:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:34.235 10:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.235 10:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:34.235 10:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.235 10:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:34.235 10:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.235 10:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:34.235 [2024-11-19 10:56:41.335347] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:34.235 10:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.235 10:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:29:34.235 10:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.235 10:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:34.235 NULL1 00:29:34.235 10:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.235 10:56:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:34.235 10:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.235 10:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:34.235 Delay0 00:29:34.235 10:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.235 10:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:34.235 10:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.235 10:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:34.235 10:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.235 10:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1870158 00:29:34.235 10:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:29:34.235 10:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:34.235 [2024-11-19 10:56:41.448015] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
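Each rpc_cmd above maps one-to-one onto a scripts/rpc.py call. Restated as a plain script (RPC names and arguments are copied from the trace; only the $SPDK variable from the sketch above is an assumption), the key detail is the delay bdev: with every read and write held for about one second, perf's 128-deep queues are guaranteed to be full of in-flight commands when the subsystem is deleted mid-run.

RPC="$SPDK/scripts/rpc.py"
$RPC nvmf_create_transport -t tcp -o -u 8192               # TCP transport, flags as logged
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512                       # 1000 MB null bdev, 512 B blocks
$RPC bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000            # avg/p99 read+write latency, in us
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
"$SPDK/build/bin/spdk_nvme_perf" -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &              # 5 s, QD 128, 70/30 r/w, 512 B
perf_pid=$!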
00:29:36.139 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:36.139 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.139 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 Write completed with error (sct=0, sc=8) 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 starting I/O failed: -6 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 starting I/O failed: -6 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 starting I/O failed: -6 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 Write completed with error (sct=0, sc=8) 00:29:36.398 Write completed with error (sct=0, sc=8) 00:29:36.398 Write completed with error (sct=0, sc=8) 00:29:36.398 starting I/O failed: -6 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 Write completed with error (sct=0, sc=8) 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 starting I/O failed: -6 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 starting I/O failed: -6 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 Write completed with error (sct=0, sc=8) 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 starting I/O failed: -6 00:29:36.398 Write completed with error (sct=0, sc=8) 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 starting I/O failed: -6 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 Write completed with error (sct=0, sc=8) 00:29:36.398 starting I/O failed: -6 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 Write completed with error (sct=0, sc=8) 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 Write completed with error (sct=0, sc=8) 00:29:36.398 starting I/O failed: -6 00:29:36.398 Write completed with error (sct=0, sc=8) 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 [2024-11-19 10:56:43.644713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b84a0 is same with the state(6) to be set 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 
Read completed with error (sct=0, sc=8) 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 Write completed with error (sct=0, sc=8) 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 Write completed with error (sct=0, sc=8) 00:29:36.398 Write completed with error (sct=0, sc=8) 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 Write completed with error (sct=0, sc=8) 00:29:36.398 Write completed with error (sct=0, sc=8) 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 Write completed with error (sct=0, sc=8) 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.398 Write completed with error (sct=0, sc=8) 00:29:36.398 Write completed with error (sct=0, sc=8) 00:29:36.398 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Write completed with error (sct=0, sc=8) 00:29:36.399 Write completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Write completed with error (sct=0, sc=8) 00:29:36.399 Write completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Write completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Write completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 [2024-11-19 10:56:43.645407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b8680 is same with the state(6) to be set 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Write completed with error (sct=0, sc=8) 00:29:36.399 starting I/O failed: -6 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Write completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 starting I/O failed: -6 00:29:36.399 Write completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Write completed with error (sct=0, sc=8) 00:29:36.399 starting I/O failed: -6 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 starting I/O failed: -6 00:29:36.399 Read completed with error (sct=0, sc=8) 
00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 starting I/O failed: -6 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Write completed with error (sct=0, sc=8) 00:29:36.399 Write completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 starting I/O failed: -6 00:29:36.399 Write completed with error (sct=0, sc=8) 00:29:36.399 Write completed with error (sct=0, sc=8) 00:29:36.399 Write completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 starting I/O failed: -6 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Write completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 starting I/O failed: -6 00:29:36.399 Write completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Write completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 starting I/O failed: -6 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 starting I/O failed: -6 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 [2024-11-19 10:56:43.649117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7074000c40 is same with the state(6) to be set 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Write completed with error (sct=0, sc=8) 00:29:36.399 Write completed with error (sct=0, sc=8) 00:29:36.399 Write completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Write completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Write completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Write completed with error (sct=0, sc=8) 00:29:36.399 Write completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Write completed with error (sct=0, sc=8) 00:29:36.399 Write completed with error (sct=0, sc=8) 00:29:36.399 Write completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 
Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Write completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Write completed with error (sct=0, sc=8) 00:29:36.399 Write completed with error (sct=0, sc=8) 00:29:36.399 Write completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Write completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Write completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:36.399 Read completed with error (sct=0, sc=8) 00:29:37.337 [2024-11-19 10:56:44.625943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b99a0 is same with the state(6) to be set 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Write completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Write completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 [2024-11-19 10:56:44.647993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b82c0 is same with the state(6) to be set 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Write completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Write completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Write completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 [2024-11-19 10:56:44.648439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b8860 is same 
with the state(6) to be set 00:29:37.337 Write completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Write completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Write completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Write completed with error (sct=0, sc=8) 00:29:37.337 Write completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Write completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 [2024-11-19 10:56:44.651939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f707400d020 is same with the state(6) to be set 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Write completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Write completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Write completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Write completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Write completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 Read completed with error (sct=0, sc=8) 00:29:37.337 [2024-11-19 10:56:44.652583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f707400d680 is same with the state(6) to be set 00:29:37.337 Initializing NVMe Controllers 00:29:37.337 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:37.337 Controller IO queue size 128, less than required. 00:29:37.337 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:37.337 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:37.337 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:37.337 Initialization complete. Launching workers. 
00:29:37.337 ======================================================== 00:29:37.337 Latency(us) 00:29:37.337 Device Information : IOPS MiB/s Average min max 00:29:37.337 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 160.33 0.08 916151.35 701.88 1005944.16 00:29:37.337 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 164.31 0.08 908063.79 253.03 1042222.86 00:29:37.337 ======================================================== 00:29:37.337 Total : 324.64 0.16 912057.95 253.03 1042222.86 00:29:37.337 00:29:37.337 [2024-11-19 10:56:44.653205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b99a0 (9): Bad file descriptor 00:29:37.337 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:29:37.337 10:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.338 10:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:29:37.338 10:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1870158 00:29:37.338 10:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:29:37.906 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:29:37.906 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1870158 00:29:37.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1870158) - No such process 00:29:37.906 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1870158 00:29:37.906 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:29:37.906 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1870158 00:29:37.906 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:29:37.906 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:37.906 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:29:37.906 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:37.906 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1870158 00:29:37.906 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:29:37.906 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:37.906 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:37.906 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:37.906 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
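This is the core assertion of the test: nvmf_delete_subsystem was issued while roughly a second of I/O was still queued behind Delay0, so every outstanding command comes back failed (sct=0, sc=8, consistent with commands aborted when their submission queues are torn down) instead of hanging, and perf exits reporting "errors occurred". The kill -0 and sleep 0.5 lines around this point poll for that exit; a hedged reconstruction of the loop:

delay=0
while kill -0 "$perf_pid" 2>/dev/null; do   # perf still running?
    (( delay++ > 20 )) && exit 1            # fail if it lingers too long
    sleep 0.5
done
# once the PID is gone, a bare kill -0 prints "No such process", as traced above

The NOT wait call that follows then asserts perf's exit status was nonzero, since this workload is expected to fail.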
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:37.906 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.906 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:37.906 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.906 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:37.906 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.906 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:37.906 [2024-11-19 10:56:45.183265] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:37.906 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.906 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:37.906 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.906 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:37.906 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.907 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1870798 00:29:37.907 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:29:37.907 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:37.907 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1870798 00:29:37.907 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:37.907 [2024-11-19 10:56:45.266093] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
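For the second pass the subsystem and listener are recreated and a shorter 3 second perf run is started; this time nothing is deleted underneath it, so the polling below simply sleeps until perf finishes on its own, and the run must complete cleanly. The results table further down also doubles as a sanity check on the delay bdev, as a back-of-the-envelope calculation shows (assuming the ~1 s artificial delay dominates all other latency):

per-core IOPS   ~ queue depth / latency = 128 / 1.0 s = 128
per-core MiB/s  ~ 128 x 512 B per second ~ 0.06 MiB/s
two-core total  ~ 256 IOPS and ~0.12 MiB/s

which matches the 128.00 IOPS rows and average latencies of roughly 1,002,000 to 1,006,000 us reported below.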
00:29:38.475 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:38.475 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1870798 00:29:38.475 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:39.043 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:39.043 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1870798 00:29:39.043 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:39.302 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:39.302 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1870798 00:29:39.302 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:39.870 10:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:39.870 10:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1870798 00:29:39.870 10:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:40.439 10:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:40.439 10:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1870798 00:29:40.439 10:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:41.037 10:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:41.037 10:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1870798 00:29:41.037 10:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:41.037 Initializing NVMe Controllers 00:29:41.037 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:41.037 Controller IO queue size 128, less than required. 00:29:41.037 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:41.037 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:41.037 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:41.037 Initialization complete. Launching workers. 
00:29:41.037 ======================================================== 00:29:41.037 Latency(us) 00:29:41.037 Device Information : IOPS MiB/s Average min max 00:29:41.037 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002260.91 1000137.53 1006358.99 00:29:41.037 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005852.16 1000384.61 1042767.91 00:29:41.037 ======================================================== 00:29:41.037 Total : 256.00 0.12 1004056.54 1000137.53 1042767.91 00:29:41.037 00:29:41.296 10:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:41.296 10:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1870798 00:29:41.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1870798) - No such process 00:29:41.296 10:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1870798 00:29:41.296 10:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:41.296 10:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:29:41.296 10:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:41.296 10:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:29:41.296 10:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:41.297 10:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:29:41.297 10:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:41.297 10:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:41.297 rmmod nvme_tcp 00:29:41.556 rmmod nvme_fabrics 00:29:41.556 rmmod nvme_keyring 00:29:41.556 10:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:41.556 10:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:29:41.556 10:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:29:41.556 10:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1870131 ']' 00:29:41.556 10:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1870131 00:29:41.556 10:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1870131 ']' 00:29:41.556 10:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1870131 00:29:41.556 10:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:29:41.556 10:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:41.556 10:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1870131 00:29:41.556 10:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:41.556 10:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:41.556 10:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1870131' 00:29:41.556 killing process with pid 1870131 00:29:41.556 10:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1870131 00:29:41.556 10:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1870131 00:29:41.556 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:41.556 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:41.556 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:41.556 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:29:41.815 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:29:41.815 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:41.815 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:29:41.815 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:41.815 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:41.815 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:41.815 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:41.815 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:43.722 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:43.722 00:29:43.722 real 0m16.207s 00:29:43.722 user 0m26.187s 00:29:43.722 sys 0m6.305s 00:29:43.722 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:43.722 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:43.722 ************************************ 00:29:43.722 END TEST nvmf_delete_subsystem 00:29:43.722 ************************************ 00:29:43.722 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:43.722 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:43.722 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:29:43.722 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:43.722 ************************************ 00:29:43.722 START TEST nvmf_host_management 00:29:43.722 ************************************ 00:29:43.723 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:43.983 * Looking for test storage... 00:29:43.983 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:43.983 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:43.983 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:29:43.983 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:43.983 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:43.983 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:43.983 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:43.983 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:43.983 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:29:43.983 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:29:43.983 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:29:43.983 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:29:43.983 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:29:43.983 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:29:43.983 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:29:43.983 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:43.983 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:29:43.983 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:29:43.983 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:43.983 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:43.983 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:29:43.983 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:29:43.983 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:43.983 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:43.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:43.984 --rc genhtml_branch_coverage=1 00:29:43.984 --rc genhtml_function_coverage=1 00:29:43.984 --rc genhtml_legend=1 00:29:43.984 --rc geninfo_all_blocks=1 00:29:43.984 --rc geninfo_unexecuted_blocks=1 00:29:43.984 00:29:43.984 ' 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:43.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:43.984 --rc genhtml_branch_coverage=1 00:29:43.984 --rc genhtml_function_coverage=1 00:29:43.984 --rc genhtml_legend=1 00:29:43.984 --rc geninfo_all_blocks=1 00:29:43.984 --rc geninfo_unexecuted_blocks=1 00:29:43.984 00:29:43.984 ' 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:43.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:43.984 --rc genhtml_branch_coverage=1 00:29:43.984 --rc genhtml_function_coverage=1 00:29:43.984 --rc genhtml_legend=1 00:29:43.984 --rc geninfo_all_blocks=1 00:29:43.984 --rc geninfo_unexecuted_blocks=1 00:29:43.984 00:29:43.984 ' 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:43.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:43.984 --rc genhtml_branch_coverage=1 00:29:43.984 --rc genhtml_function_coverage=1 00:29:43.984 --rc genhtml_legend=1 
00:29:43.984 --rc geninfo_all_blocks=1 00:29:43.984 --rc geninfo_unexecuted_blocks=1 00:29:43.984 00:29:43.984 ' 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:43.984 10:56:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:43.984 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:43.985 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:43.985 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:29:43.985 10:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:50.561 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:50.561 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:29:50.561 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:50.561 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:50.561 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:50.561 10:56:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:50.561 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:50.561 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:29:50.561 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:50.561 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:29:50.561 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:29:50.561 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:29:50.561 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:29:50.561 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:29:50.561 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:29:50.561 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:50.561 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:50.561 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:50.561 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:50.561 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:50.561 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:50.561 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:50.561 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:50.561 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:50.561 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:50.561 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:50.561 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:50.561 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:50.561 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:50.561 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:50.561 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:50.561 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:50.561 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:50.561 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:50.561 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:50.561 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:50.561 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:50.561 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:50.561 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:50.561 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:50.562 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:50.562 Found net devices under 0000:86:00.0: cvl_0_0 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:50.562 Found net devices under 0000:86:00.1: cvl_0_1 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:50.562 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:50.562 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:50.562 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:50.562 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:50.562 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:50.562 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:50.562 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:50.562 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:50.562 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:50.562 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:50.562 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.524 ms 00:29:50.562 00:29:50.562 --- 10.0.0.2 ping statistics --- 00:29:50.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:50.562 rtt min/avg/max/mdev = 0.524/0.524/0.524/0.000 ms 00:29:50.562 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:50.562 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:50.562 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:29:50.562 00:29:50.562 --- 10.0.0.1 ping statistics --- 00:29:50.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:50.562 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:29:50.562 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:50.562 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:29:50.562 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:50.562 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:50.562 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:50.562 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:50.562 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:50.562 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:50.562 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:50.562 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:29:50.562 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:29:50.562 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:29:50.562 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:50.562 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:50.562 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:50.562 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1874849 00:29:50.562 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1874849 00:29:50.562 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:29:50.562 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1874849 ']' 00:29:50.562 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:50.562 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:50.562 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:50.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:50.562 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:50.562 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:50.562 [2024-11-19 10:56:57.301185] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:50.562 [2024-11-19 10:56:57.302175] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:29:50.563 [2024-11-19 10:56:57.302212] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:50.563 [2024-11-19 10:56:57.384123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:50.563 [2024-11-19 10:56:57.427563] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:50.563 [2024-11-19 10:56:57.427601] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:50.563 [2024-11-19 10:56:57.427609] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:50.563 [2024-11-19 10:56:57.427614] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:50.563 [2024-11-19 10:56:57.427619] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:50.563 [2024-11-19 10:56:57.429087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:50.563 [2024-11-19 10:56:57.429174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:50.563 [2024-11-19 10:56:57.429306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:50.563 [2024-11-19 10:56:57.429307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:50.563 [2024-11-19 10:56:57.496551] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:50.563 [2024-11-19 10:56:57.497167] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:50.563 [2024-11-19 10:56:57.497749] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:50.563 [2024-11-19 10:56:57.497916] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:29:50.563 [2024-11-19 10:56:57.498036] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
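With the reactors up and every spdk_thread in interrupt mode, the traces that follow configure the target over its RPC socket: a TCP transport (nvmf_create_transport -t tcp -o -u 8192), a Malloc0 bdev, and a listener on 10.0.0.2:4420. The suite batches its subsystem RPCs through rpcs.txt and the rpc_cmd wrapper, and the log never prints that file, so the middle commands in this sketch are a hedged reconstruction from values visible elsewhere in this log (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512, cnode0/host0, serial SPDKISFASTANDAWESOME); the launch line and transport flags are verbatim from the trace:

# Launch the target in the test namespace, as logged above:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
# After waitforlisten confirms /var/tmp/spdk.sock, configure it over RPC:
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420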
00:29:50.563 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:50.563 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:29:50.563 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:50.563 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:50.563 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:50.563 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:50.563 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:50.563 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.563 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:50.563 [2024-11-19 10:56:57.573994] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:50.563 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.563 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:29:50.563 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:50.563 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:50.563 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:50.563 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:29:50.563 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:29:50.563 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.563 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:50.563 Malloc0 00:29:50.563 [2024-11-19 10:56:57.666256] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:50.563 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.563 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:29:50.563 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:50.563 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:50.563 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1874897 00:29:50.563 10:56:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1874897 /var/tmp/bdevperf.sock 00:29:50.563 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1874897 ']' 00:29:50.563 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:50.563 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:50.563 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:29:50.563 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:50.563 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:29:50.563 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:50.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:50.563 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:50.563 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:29:50.563 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:50.563 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:50.563 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:50.563 { 00:29:50.563 "params": { 00:29:50.563 "name": "Nvme$subsystem", 00:29:50.563 "trtype": "$TEST_TRANSPORT", 00:29:50.563 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:50.563 "adrfam": "ipv4", 00:29:50.563 "trsvcid": "$NVMF_PORT", 00:29:50.563 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:50.563 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:50.563 "hdgst": ${hdgst:-false}, 00:29:50.563 "ddgst": ${ddgst:-false} 00:29:50.563 }, 00:29:50.563 "method": "bdev_nvme_attach_controller" 00:29:50.563 } 00:29:50.563 EOF 00:29:50.563 )") 00:29:50.563 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:29:50.563 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
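gen_nvmf_target_json, traced above, renders one bdev_nvme_attach_controller stanza per requested subsystem from an unquoted heredoc, so the shell fills in addresses and NQNs with ${hdgst:-false}-style defaults, then pipes the result through jq at nvmf/common.sh@584; the JSON it hands bdevperf over /dev/fd/63 is printed just below. A self-contained sketch of the same templating idiom, where the function name and fallback defaults are illustrative assumptions and the keys mirror the stanza in the trace:

# Template one controller stanza; unquoted EOF lets the shell substitute values.
gen_controller_json() {
  local n=$1
  cat <<EOF
{
  "params": {
    "name": "Nvme$n",
    "trtype": "tcp",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$n",
    "hostnqn": "nqn.2016-06.io.spdk:host$n",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_controller_json 0 | jq .   # validate and pretty-print, as at nvmf/common.sh@584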
00:29:50.563 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:29:50.563 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:50.563 "params": { 00:29:50.563 "name": "Nvme0", 00:29:50.563 "trtype": "tcp", 00:29:50.563 "traddr": "10.0.0.2", 00:29:50.563 "adrfam": "ipv4", 00:29:50.563 "trsvcid": "4420", 00:29:50.563 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:50.563 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:50.563 "hdgst": false, 00:29:50.563 "ddgst": false 00:29:50.563 }, 00:29:50.563 "method": "bdev_nvme_attach_controller" 00:29:50.563 }' 00:29:50.563 [2024-11-19 10:56:57.763918] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:29:50.563 [2024-11-19 10:56:57.763973] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1874897 ] 00:29:50.563 [2024-11-19 10:56:57.842136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:50.563 [2024-11-19 10:56:57.883492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:50.844 Running I/O for 10 seconds... 00:29:50.844 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:50.844 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:29:50.844 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:50.844 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.844 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:50.844 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.844 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:50.844 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:29:50.844 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:50.844 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:29:50.844 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:29:50.844 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:29:50.844 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:29:50.844 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:29:50.844 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:29:50.844 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:29:50.844 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.844 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:50.844 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.844 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=94 00:29:50.844 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 94 -ge 100 ']' 00:29:50.844 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:29:51.155 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:29:51.155 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:29:51.155 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:29:51.155 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:29:51.155 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.155 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:51.155 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.155 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:29:51.155 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:29:51.155 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:29:51.155 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:29:51.155 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:29:51.155 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:29:51.155 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.155 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:51.155 [2024-11-19 10:56:58.486133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.155 [2024-11-19 10:56:58.486174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.155 [2024-11-19 10:56:58.486189] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.155 [2024-11-19 10:56:58.486197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.155 [2024-11-19 10:56:58.486206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.155 [2024-11-19 10:56:58.486213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.155 [2024-11-19 10:56:58.486222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.155 [2024-11-19 10:56:58.486229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.155 [2024-11-19 10:56:58.486237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.155 [2024-11-19 10:56:58.486243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.156 [2024-11-19 10:56:58.486251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.156 [2024-11-19 10:56:58.486258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.156 [2024-11-19 10:56:58.486266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.156 [2024-11-19 10:56:58.486273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.156 [2024-11-19 10:56:58.486281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.156 [2024-11-19 10:56:58.486288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.156 [2024-11-19 10:56:58.486296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.156 [2024-11-19 10:56:58.486302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.156 [2024-11-19 10:56:58.486310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.156 [2024-11-19 10:56:58.486317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.156 [2024-11-19 10:56:58.486325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.156 [2024-11-19 10:56:58.486332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.156 [2024-11-19 10:56:58.486342] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.156 [2024-11-19 10:56:58.486349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.156 [2024-11-19 10:56:58.486363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.156 [2024-11-19 10:56:58.486370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.156 [2024-11-19 10:56:58.486378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.156 [2024-11-19 10:56:58.486385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.156 [2024-11-19 10:56:58.486393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.156 [2024-11-19 10:56:58.486399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.156 [2024-11-19 10:56:58.486407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.156 [2024-11-19 10:56:58.486413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.156 [2024-11-19 10:56:58.486421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.156 [2024-11-19 10:56:58.486427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.156 [2024-11-19 10:56:58.486436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.156 [2024-11-19 10:56:58.486442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.156 [2024-11-19 10:56:58.486451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.156 [2024-11-19 10:56:58.486457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.156 [2024-11-19 10:56:58.486466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.156 [2024-11-19 10:56:58.486472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.156 [2024-11-19 10:56:58.486480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.156 [2024-11-19 10:56:58.486487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.156 [2024-11-19 10:56:58.486495] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.156 [2024-11-19 10:56:58.486501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.156 [2024-11-19 10:56:58.486509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.156 [2024-11-19 10:56:58.486515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.156 [2024-11-19 10:56:58.486523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.156 [2024-11-19 10:56:58.486529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.156 [2024-11-19 10:56:58.486538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.156 [2024-11-19 10:56:58.486546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.156 [2024-11-19 10:56:58.486555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.156 [2024-11-19 10:56:58.486561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.156 [2024-11-19 10:56:58.486569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.156 [2024-11-19 10:56:58.486576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.156 [2024-11-19 10:56:58.486584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.156 [2024-11-19 10:56:58.486590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.156 [2024-11-19 10:56:58.486598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.156 [2024-11-19 10:56:58.486604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.156 [2024-11-19 10:56:58.486612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.156 [2024-11-19 10:56:58.486619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.156 [2024-11-19 10:56:58.486627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.156 [2024-11-19 10:56:58.486633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.156 [2024-11-19 10:56:58.486641] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.156 [2024-11-19 10:56:58.486648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.156 [2024-11-19 10:56:58.486656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.156 [2024-11-19 10:56:58.486662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.156 [2024-11-19 10:56:58.486670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.156 [2024-11-19 10:56:58.486677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.156 [2024-11-19 10:56:58.486685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.156 [2024-11-19 10:56:58.486692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.156 [2024-11-19 10:56:58.486701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.156 [2024-11-19 10:56:58.486708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.156 [2024-11-19 10:56:58.486717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.156 [2024-11-19 10:56:58.486724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.156 [2024-11-19 10:56:58.486734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.156 [2024-11-19 10:56:58.486741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.156 [2024-11-19 10:56:58.486749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.156 [2024-11-19 10:56:58.486762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.156 [2024-11-19 10:56:58.486771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.156 [2024-11-19 10:56:58.486777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.156 [2024-11-19 10:56:58.486785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.156 [2024-11-19 10:56:58.486792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.156 [2024-11-19 10:56:58.486800] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.156 [2024-11-19 10:56:58.486807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.156 [2024-11-19 10:56:58.486814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.156 [2024-11-19 10:56:58.486821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.156 [2024-11-19 10:56:58.486829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.157 [2024-11-19 10:56:58.486835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.157 [2024-11-19 10:56:58.486843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.157 [2024-11-19 10:56:58.486850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.157 [2024-11-19 10:56:58.486858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.157 [2024-11-19 10:56:58.486865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.157 [2024-11-19 10:56:58.486872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.157 [2024-11-19 10:56:58.486879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.157 [2024-11-19 10:56:58.486887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.157 [2024-11-19 10:56:58.486894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.157 [2024-11-19 10:56:58.486905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.157 [2024-11-19 10:56:58.486912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.157 [2024-11-19 10:56:58.486921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.157 [2024-11-19 10:56:58.486932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.157 [2024-11-19 10:56:58.486941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.157 [2024-11-19 10:56:58.486955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.157 [2024-11-19 10:56:58.486963] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.157 [2024-11-19 10:56:58.486970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.157 [2024-11-19 10:56:58.486978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.157 [2024-11-19 10:56:58.486984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.157 [2024-11-19 10:56:58.486992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.157 [2024-11-19 10:56:58.486999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.157 [2024-11-19 10:56:58.487007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.157 [2024-11-19 10:56:58.487015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.157 [2024-11-19 10:56:58.487023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.157 [2024-11-19 10:56:58.487030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.157 [2024-11-19 10:56:58.487038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.157 [2024-11-19 10:56:58.487045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.157 [2024-11-19 10:56:58.487054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.157 [2024-11-19 10:56:58.487062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.157 [2024-11-19 10:56:58.487070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.157 [2024-11-19 10:56:58.487076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.157 [2024-11-19 10:56:58.487084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.157 [2024-11-19 10:56:58.487091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.157 [2024-11-19 10:56:58.487099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.157 [2024-11-19 10:56:58.487106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.157 [2024-11-19 10:56:58.487114] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.157 [2024-11-19 10:56:58.487122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.157 [2024-11-19 10:56:58.487133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.157 [2024-11-19 10:56:58.487139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.157 [2024-11-19 10:56:58.487148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.157 [2024-11-19 10:56:58.487155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.157 [2024-11-19 10:56:58.488112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:29:51.157 task offset: 99072 on job bdev=Nvme0n1 fails
00:29:51.157
00:29:51.157 Latency(us)
00:29:51.157 [2024-11-19T09:56:58.606Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:51.157 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:51.157 Job: Nvme0n1 ended in about 0.41 seconds with error
00:29:51.157 Verification LBA range: start 0x0 length 0x400
00:29:51.157 Nvme0n1 : 0.41 1885.79 117.86 157.15 0.00 30480.86 1481.68 27468.13
00:29:51.157 [2024-11-19T09:56:58.606Z] ===================================================================================================================
00:29:51.157 [2024-11-19T09:56:58.606Z] Total : 1885.79 117.86 157.15 0.00 30480.86 1481.68 27468.13
00:29:51.157 [2024-11-19 10:56:58.490516] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:29:51.157 [2024-11-19 10:56:58.490538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2055500 (9): Bad file descriptor
00:29:51.157 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:51.157 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:29:51.157 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:51.157 [2024-11-19 10:56:58.491556] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:29:51.157 [2024-11-19 10:56:58.491628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:29:51.157 [2024-11-19 10:56:58.491651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.157 [2024-11-19 10:56:58.491666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
00:29:51.157 [2024-11-19 10:56:58.491674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
00:29:51.157 [2024-11-19 10:56:58.491681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:51.157 [2024-11-19 10:56:58.491688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2055500
00:29:51.157 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:29:51.157 [2024-11-19 10:56:58.491706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2055500 (9): Bad file descriptor
00:29:51.157 [2024-11-19 10:56:58.491718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:29:51.157 [2024-11-19 10:56:58.491725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:29:51.157 [2024-11-19 10:56:58.491734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:29:51.157 [2024-11-19 10:56:58.491746] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:29:51.157 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:51.157 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
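The failure above is the scripted part of the host_management test: the target's I/O submission queue is deleted while bdevperf still has 64 commands queued, so every outstanding WRITE/READ completes as ABORTED - SQ DELETION, and the first reconnect is rejected by nvmf_qpair_access_allowed because nqn.2016-06.io.spdk:host0 is not yet on the subsystem's allow list (hence the FABRIC CONNECT failure with sct 1, sc 132). The trace then issues rpc_cmd nvmf_subsystem_add_host to open access. A rough standalone sketch of the same RPC, assuming a running SPDK target on the default /var/tmp/spdk.sock socket (this block is illustrative, not part of the captured run):

# Allow host0 on cnode0 so the next FABRIC CONNECT is admitted.
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# Confirm: the host should now appear in the subsystem's hosts list.
./scripts/rpc.py nvmf_get_subsystems | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode0") | .hosts'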
00:29:52.095 10:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1874897
00:29:52.095 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1874897) - No such process
00:29:52.095 10:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true
00:29:52.095 10:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:29:52.095 10:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:29:52.095 10:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:29:52.095 10:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:29:52.095 10:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:29:52.095 10:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:29:52.095 10:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:29:52.095 {
00:29:52.095 "params": {
00:29:52.095 "name": "Nvme$subsystem",
00:29:52.095 "trtype": "$TEST_TRANSPORT",
00:29:52.095 "traddr": "$NVMF_FIRST_TARGET_IP",
00:29:52.095 "adrfam": "ipv4",
00:29:52.095 "trsvcid": "$NVMF_PORT",
00:29:52.095 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:29:52.095 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:29:52.095 "hdgst": ${hdgst:-false},
00:29:52.095 "ddgst": ${ddgst:-false}
00:29:52.095 },
00:29:52.095 "method": "bdev_nvme_attach_controller"
00:29:52.095 }
00:29:52.095 EOF
00:29:52.095 )")
00:29:52.095 10:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:29:52.095 10:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:29:52.095 10:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:29:52.095 10:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:29:52.095 "params": {
00:29:52.095 "name": "Nvme0",
00:29:52.095 "trtype": "tcp",
00:29:52.095 "traddr": "10.0.0.2",
00:29:52.095 "adrfam": "ipv4",
00:29:52.095 "trsvcid": "4420",
00:29:52.095 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:29:52.095 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:29:52.095 "hdgst": false,
00:29:52.095 "ddgst": false
00:29:52.095 },
00:29:52.095 "method": "bdev_nvme_attach_controller"
00:29:52.095 }'
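gen_nvmf_target_json, traced above, expands the heredoc once per subsystem and feeds the result to bdevperf on /dev/fd/62; the resolved controller entry it printed is the block just above. A minimal sketch of the same invocation outside the harness, using the addresses and NQNs from this run; the {"subsystems": [...]} wrapper is the standard SPDK JSON config layout and is assumed here, since the log only shows the per-controller fragment:

# Sketch (assumes an SPDK build tree); process substitution plays the role of /dev/fd/62.
cfg='{"subsystems":[{"subsystem":"bdev","config":[{"method":"bdev_nvme_attach_controller",
"params":{"name":"Nvme0","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420",
"subnqn":"nqn.2016-06.io.spdk:cnode0","hostnqn":"nqn.2016-06.io.spdk:host0","hdgst":false,"ddgst":false}}]}]}'
./build/examples/bdevperf --json <(printf '%s\n' "$cfg") -q 64 -o 65536 -w verify -t 1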
00:29:52.095 )") 00:29:52.095 10:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:29:52.095 10:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:29:52.095 10:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:29:52.095 10:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:52.095 "params": { 00:29:52.095 "name": "Nvme0", 00:29:52.095 "trtype": "tcp", 00:29:52.095 "traddr": "10.0.0.2", 00:29:52.095 "adrfam": "ipv4", 00:29:52.095 "trsvcid": "4420", 00:29:52.095 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:52.095 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:52.095 "hdgst": false, 00:29:52.095 "ddgst": false 00:29:52.095 }, 00:29:52.095 "method": "bdev_nvme_attach_controller" 00:29:52.095 }' 00:29:52.354 [2024-11-19 10:56:59.559728] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:29:52.354 [2024-11-19 10:56:59.559776] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1875174 ] 00:29:52.354 [2024-11-19 10:56:59.633559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.354 [2024-11-19 10:56:59.673904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.613 Running I/O for 1 seconds... 00:29:53.551 1984.00 IOPS, 124.00 MiB/s 00:29:53.551 Latency(us) 00:29:53.551 [2024-11-19T09:57:01.000Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:53.551 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:53.551 Verification LBA range: start 0x0 length 0x400 00:29:53.551 Nvme0n1 : 1.03 1993.16 124.57 0.00 0.00 31606.05 5670.29 27924.03 00:29:53.551 [2024-11-19T09:57:01.000Z] =================================================================================================================== 00:29:53.551 [2024-11-19T09:57:01.000Z] Total : 1993.16 124.57 0.00 0.00 31606.05 5670.29 27924.03 00:29:53.811 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:29:53.811 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:29:53.811 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:53.811 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:53.811 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:29:53.811 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:53.811 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:29:53.811 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:53.811 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
00:29:53.811 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:29:53.811 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:29:53.811 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:29:53.811 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:29:53.811 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:29:53.811 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:53.811 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:29:53.811 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:53.811 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:29:53.811 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:53.811 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:29:53.811 rmmod nvme_tcp
00:29:53.811 rmmod nvme_fabrics
00:29:53.811 rmmod nvme_keyring
00:29:53.811 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:53.811 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:29:53.811 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:29:53.811 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1874849 ']'
00:29:53.811 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1874849
00:29:53.811 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1874849 ']'
00:29:53.811 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1874849
00:29:53.811 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname
00:29:53.811 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:53.811 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1874849
00:29:53.812 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:53.812 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:53.812 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1874849'
killing process with pid 1874849
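The killprocess helper traced above guards the kill three ways before acting: the pid must be non-empty, the process must still be alive (kill -0), and on Linux its command name must not be sudo, so the harness never kills its own privilege wrapper. A condensed sketch of the same guard with the pid parameterized (the real function lives in autotest_common.sh and does more):

# Sketch of the killprocess guard; "$1" is the target pid.
killprocess_sketch() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1        # still alive?
    if [ "$(uname)" = Linux ]; then
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1            # never kill the sudo wrapper itself
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true               # wait only succeeds for children of this shell
}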
00:29:53.812 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1874849
00:29:53.812 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1874849
00:29:54.071 [2024-11-19 10:57:01.311867] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:29:54.071 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:29:54.071 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:29:54.071 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:29:54.071 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr
00:29:54.071 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save
00:29:54.071 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:29:54.071 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore
00:29:54.071 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:29:54.071 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns
00:29:54.071 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:54.071 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:54.071 10:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:55.976 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:55.976 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:29:55.976
00:29:55.976 real 0m12.259s
00:29:55.976 user 0m17.537s
00:29:55.976 sys 0m6.319s
00:29:55.976 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:55.976 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:29:55.976 ************************************
00:29:55.976 END TEST nvmf_host_management
00:29:55.976 ************************************
00:29:56.236 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode
00:29:56.236 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:29:56.236 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:56.236 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:29:56.236 ************************************
00:29:56.236 START TEST nvmf_lvol
00:29:56.236 ************************************
00:29:56.236 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode
00:29:56.236 * Looking for test storage...
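run_test, invoked above from nvmf_target_core.sh@27, is the harness's uniform wrapper: it prints the START TEST banner, runs the test script under time (hence the real/user/sys lines), and closes with an END TEST banner so the log can be split per test. A simplified stand-in; the real function in autotest_common.sh also handles xtrace and timing records:

# Simplified sketch of the run_test banner/timing pattern.
run_test_sketch() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"
    local rc=$?
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return $rc
}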
00:29:56.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:56.236 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:56.236 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:29:56.236 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:56.236 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:56.236 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:56.236 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:56.236 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:56.236 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:29:56.236 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:29:56.236 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:29:56.236 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:29:56.236 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:29:56.236 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:29:56.236 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:29:56.236 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:56.236 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:29:56.236 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:29:56.236 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:56.236 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:56.236 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:29:56.236 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:29:56.236 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:56.236 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:29:56.236 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:29:56.236 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:29:56.236 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:29:56.236 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:56.236 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:29:56.236 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:56.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:56.237 --rc genhtml_branch_coverage=1 00:29:56.237 --rc genhtml_function_coverage=1 00:29:56.237 --rc genhtml_legend=1 00:29:56.237 --rc geninfo_all_blocks=1 00:29:56.237 --rc geninfo_unexecuted_blocks=1 00:29:56.237 00:29:56.237 ' 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:56.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:56.237 --rc genhtml_branch_coverage=1 00:29:56.237 --rc genhtml_function_coverage=1 00:29:56.237 --rc genhtml_legend=1 00:29:56.237 --rc geninfo_all_blocks=1 00:29:56.237 --rc geninfo_unexecuted_blocks=1 00:29:56.237 00:29:56.237 ' 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:56.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:56.237 --rc genhtml_branch_coverage=1 00:29:56.237 --rc genhtml_function_coverage=1 00:29:56.237 --rc genhtml_legend=1 00:29:56.237 --rc geninfo_all_blocks=1 00:29:56.237 --rc geninfo_unexecuted_blocks=1 00:29:56.237 00:29:56.237 ' 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:56.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:56.237 --rc genhtml_branch_coverage=1 00:29:56.237 --rc genhtml_function_coverage=1 00:29:56.237 --rc genhtml_legend=1 00:29:56.237 --rc geninfo_all_blocks=1 00:29:56.237 --rc geninfo_unexecuted_blocks=1 00:29:56.237 00:29:56.237 ' 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:56.237 10:57:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:56.237 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:56.496 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:29:56.496 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:29:56.496 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:56.496 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:29:56.496 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:56.496 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:56.496 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:56.496 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:56.496 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:56.497 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:56.497 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:56.497 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:56.497 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:56.497 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:56.497 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:29:56.497 10:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:03.068 10:57:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:03.068 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:03.068 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:03.068 Found net devices under 0000:86:00.0: cvl_0_0 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:03.068 Found net devices under 0000:86:00.1: cvl_0_1 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:03.068 
10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:03.068 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:03.069 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:03.069 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:03.069 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:03.069 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:03.069 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:03.069 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:03.069 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:03.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:03.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.427 ms 00:30:03.069 00:30:03.069 --- 10.0.0.2 ping statistics --- 00:30:03.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:03.069 rtt min/avg/max/mdev = 0.427/0.427/0.427/0.000 ms 00:30:03.069 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:03.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:03.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:30:03.069 00:30:03.069 --- 10.0.0.1 ping statistics --- 00:30:03.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:03.069 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:30:03.069 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:03.069 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:30:03.069 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:03.069 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:03.069 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:03.069 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:03.069 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:03.069 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:03.069 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:03.069 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:30:03.069 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:03.069 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:03.069 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:03.069 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1879422 00:30:03.069 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:30:03.069 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1879422 00:30:03.069 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1879422 ']' 00:30:03.069 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:03.069 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:03.069 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:03.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:03.069 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:03.069 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:03.069 [2024-11-19 10:57:09.655175] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:30:03.069 [2024-11-19 10:57:09.656130] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:30:03.069 [2024-11-19 10:57:09.656164] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:03.069 [2024-11-19 10:57:09.737275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:03.069 [2024-11-19 10:57:09.779593] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:03.069 [2024-11-19 10:57:09.779631] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:03.069 [2024-11-19 10:57:09.779639] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:03.069 [2024-11-19 10:57:09.779645] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:03.069 [2024-11-19 10:57:09.779650] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:03.069 [2024-11-19 10:57:09.780930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:03.069 [2024-11-19 10:57:09.781041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:03.069 [2024-11-19 10:57:09.781042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:03.069 [2024-11-19 10:57:09.848121] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:03.069 [2024-11-19 10:57:09.848953] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:03.069 [2024-11-19 10:57:09.849133] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:03.069 [2024-11-19 10:57:09.849278] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
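The trace above is nvmf_tcp_init followed by nvmfappstart: with two E810 ports found, common.sh moves the target-side port (cvl_0_0) into a private network namespace, leaves the initiator-side port (cvl_0_1) in the root namespace, opens TCP port 4420, verifies connectivity both ways, and launches nvmf_tgt inside the namespace in interrupt mode on cores 0-2 (-m 0x7). A condensed sketch of the same plumbing, assuming the interface names and binary path from this run (run as root):

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                 # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator port stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # tagged SPDK_NVMF in the real run
ping -c 1 10.0.0.2                              # root ns -> namespace over the physical link
ip netns exec "$NS" ping -c 1 10.0.0.1          # and back
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &

Splitting the two ports across namespaces forces target traffic over the physical link between them rather than a loopback path, which is why both pings above report non-zero RTTs.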
00:30:03.069 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:03.069 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:30:03.069 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:03.069 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:03.069 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:03.069 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:03.069 10:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:03.069 [2024-11-19 10:57:10.093816] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:03.069 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:03.069 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:30:03.069 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:03.329 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:30:03.329 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:30:03.329 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:30:03.588 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=4a5e1312-dbee-42b4-b6a7-9ce70d51d84a 00:30:03.588 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4a5e1312-dbee-42b4-b6a7-9ce70d51d84a lvol 20 00:30:03.848 10:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=588ee207-1355-428a-bba4-5de531ff86f4 00:30:03.848 10:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:04.107 10:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 588ee207-1355-428a-bba4-5de531ff86f4 00:30:04.107 10:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:04.365 [2024-11-19 10:57:11.693743] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:30:04.365 10:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:04.623 10:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1879907 00:30:04.623 10:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:30:04.623 10:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:30:05.556 10:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 588ee207-1355-428a-bba4-5de531ff86f4 MY_SNAPSHOT 00:30:05.815 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=ad47e196-d2f3-4ce6-b8ea-431105eab98a 00:30:05.815 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 588ee207-1355-428a-bba4-5de531ff86f4 30 00:30:06.072 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone ad47e196-d2f3-4ce6-b8ea-431105eab98a MY_CLONE 00:30:06.331 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=d6bafe5f-e239-4d1a-9d73-2dc79f46ebd7 00:30:06.331 10:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate d6bafe5f-e239-4d1a-9d73-2dc79f46ebd7 00:30:06.899 10:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1879907 00:30:15.018 Initializing NVMe Controllers 00:30:15.018 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:15.018 Controller IO queue size 128, less than required. 00:30:15.018 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:15.018 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:30:15.018 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:30:15.018 Initialization complete. Launching workers. 
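Stripped of the jenkins paths, the nvmf_lvol body above is a short RPC sequence: stack a raid0 on two 64 MiB malloc bdevs, put an lvstore on the raid, carve out a 20 MiB lvol, export it at nqn.2016-06.io.spdk:cnode0, then snapshot/resize/clone/inflate the lvol while spdk_nvme_perf drives 4 KiB random writes at it; the latency table below is that perf run completing. A sketch using the same calls (rpc here stands for scripts/rpc.py; each lvol RPC prints the name or UUID the trace captures):

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                       # -> Malloc0
$rpc bdev_malloc_create 64 512                       # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)      # 20 MiB lvol
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
perf_pid=$!
sleep 1                                              # give perf a head start, as the test does
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30                     # grow the live lvol to 30 MiB
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"                      # detach the clone from its snapshot
wait "$perf_pid"

The point is that all four lvol mutations land while I/O is in flight; the "Controller IO queue size 128, less than required" notice in the perf output is spdk_nvme_perf warning that requests will queue in the driver at -q 128, not a failure.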
00:30:15.018 ======================================================== 00:30:15.018 Latency(us) 00:30:15.018 Device Information : IOPS MiB/s Average min max 00:30:15.018 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12335.80 48.19 10378.68 1546.75 66053.04 00:30:15.018 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12200.00 47.66 10493.79 4412.90 58784.98 00:30:15.018 ======================================================== 00:30:15.018 Total : 24535.80 95.84 10435.92 1546.75 66053.04 00:30:15.018 00:30:15.018 10:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:15.277 10:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 588ee207-1355-428a-bba4-5de531ff86f4 00:30:15.536 10:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4a5e1312-dbee-42b4-b6a7-9ce70d51d84a 00:30:15.536 10:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:30:15.536 10:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:30:15.536 10:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:30:15.536 10:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:15.536 10:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:30:15.536 10:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:15.536 10:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:30:15.536 10:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:15.536 10:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:15.536 rmmod nvme_tcp 00:30:15.536 rmmod nvme_fabrics 00:30:15.536 rmmod nvme_keyring 00:30:15.795 10:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:15.795 10:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:30:15.795 10:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:30:15.795 10:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1879422 ']' 00:30:15.795 10:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1879422 00:30:15.795 10:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1879422 ']' 00:30:15.795 10:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1879422 00:30:15.795 10:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:30:15.795 10:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:15.795 10:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1879422 00:30:15.795 10:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:15.795 10:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:15.795 10:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1879422' 00:30:15.795 killing process with pid 1879422 00:30:15.795 10:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1879422 00:30:15.795 10:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1879422 00:30:16.055 10:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:16.055 10:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:16.055 10:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:16.055 10:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:30:16.055 10:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:30:16.055 10:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:16.055 10:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:30:16.055 10:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:16.055 10:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:16.055 10:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:16.055 10:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:16.055 10:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:17.964 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:17.964 00:30:17.964 real 0m21.841s 00:30:17.964 user 0m55.472s 00:30:17.964 sys 0m10.099s 00:30:17.964 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:17.964 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:17.964 ************************************ 00:30:17.964 END TEST nvmf_lvol 00:30:17.964 ************************************ 00:30:17.964 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:17.964 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:17.964 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:17.964 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:17.964 ************************************ 00:30:17.964 START TEST nvmf_lvs_grow 00:30:17.964 
************************************ 00:30:17.964 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:18.225 * Looking for test storage... 00:30:18.225 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:18.225 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:18.225 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:30:18.225 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:18.225 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:18.225 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:18.225 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:18.225 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:18.225 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:30:18.225 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:30:18.225 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:30:18.225 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:30:18.225 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:30:18.225 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:30:18.225 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:30:18.225 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:18.225 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:30:18.225 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:30:18.225 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:18.225 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:18.225 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:30:18.225 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:30:18.225 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:18.225 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:30:18.225 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:30:18.225 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:30:18.225 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:18.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.226 --rc genhtml_branch_coverage=1 00:30:18.226 --rc genhtml_function_coverage=1 00:30:18.226 --rc genhtml_legend=1 00:30:18.226 --rc geninfo_all_blocks=1 00:30:18.226 --rc geninfo_unexecuted_blocks=1 00:30:18.226 00:30:18.226 ' 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:18.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.226 --rc genhtml_branch_coverage=1 00:30:18.226 --rc genhtml_function_coverage=1 00:30:18.226 --rc genhtml_legend=1 00:30:18.226 --rc geninfo_all_blocks=1 00:30:18.226 --rc geninfo_unexecuted_blocks=1 00:30:18.226 00:30:18.226 ' 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:18.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.226 --rc genhtml_branch_coverage=1 00:30:18.226 --rc genhtml_function_coverage=1 00:30:18.226 --rc genhtml_legend=1 00:30:18.226 --rc geninfo_all_blocks=1 00:30:18.226 --rc geninfo_unexecuted_blocks=1 00:30:18.226 00:30:18.226 ' 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:18.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.226 --rc genhtml_branch_coverage=1 00:30:18.226 --rc genhtml_function_coverage=1 00:30:18.226 --rc genhtml_legend=1 00:30:18.226 --rc geninfo_all_blocks=1 00:30:18.226 --rc geninfo_unexecuted_blocks=1 00:30:18.226 00:30:18.226 ' 00:30:18.226 10:57:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
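The wall of PATH exports above is paths/export.sh being re-sourced; the useful part of this common.sh prologue is the handful of defaults every nvmf test below inherits, plus the NVMF_APP argument list that --interrupt-mode is appended to. Condensed (values as traced in this run; the hostnqn is regenerated per host by nvme gen-hostnqn, and the hostid derivation shown is illustrative, not common.sh verbatim):

NVMF_PORT=4420
NVMF_SECOND_PORT=4421
NVMF_THIRD_PORT=4422
NVMF_SERIAL=SPDKISFASTANDAWESOME
NVME_HOSTNQN=$(nvme gen-hostnqn)            # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}         # illustrative; the trace shows the bare uuid
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
NVMF_APP=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF --interrupt-mode)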
00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:30:18.226 10:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:24.814 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:24.814 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:30:24.814 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:24.814 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:24.814 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:24.814 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:24.814 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:24.814 10:57:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:30:24.814 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:24.814 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:30:24.814 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:30:24.814 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:30:24.814 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:30:24.814 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:30:24.814 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:30:24.814 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:24.814 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:24.814 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:24.814 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:24.814 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:24.814 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:24.814 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:24.814 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:24.814 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:24.814 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:24.815 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:24.815 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:24.815 Found net devices under 0000:86:00.0: cvl_0_0 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:24.815 Found net devices under 0000:86:00.1: cvl_0_1 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:24.815 10:57:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:24.815 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:24.815 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.447 ms 00:30:24.815 00:30:24.815 --- 10.0.0.2 ping statistics --- 00:30:24.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:24.815 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:24.815 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:24.815 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:30:24.815 00:30:24.815 --- 10.0.0.1 ping statistics --- 00:30:24.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:24.815 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1885047 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1885047 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:24.815 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1885047 ']' 00:30:24.816 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:24.816 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:24.816 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:24.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:24.816 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:24.816 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:24.816 [2024-11-19 10:57:31.538528] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
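Unlike the nvmf_lvol run earlier, nvmfappstart here passes -m 0x1, so the lvs_grow target gets a single reactor and a single poll group, still in interrupt mode. waitforlisten then holds the test until the app answers on /var/tmp/spdk.sock; a simplified sketch of that gate, using the rpc_addr and max_retries=100 visible in the trace (the real helper in autotest_common.sh differs in detail):

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 100; i > 0; i--)); do
        kill -0 "$pid" 2>/dev/null || return 1            # app died during startup
        ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1                                              # never started listening
}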
00:30:24.816 [2024-11-19 10:57:31.539479] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:30:24.816 [2024-11-19 10:57:31.539512] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:24.816 [2024-11-19 10:57:31.619302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:24.816 [2024-11-19 10:57:31.660742] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:24.816 [2024-11-19 10:57:31.660780] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:24.816 [2024-11-19 10:57:31.660787] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:24.816 [2024-11-19 10:57:31.660793] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:24.816 [2024-11-19 10:57:31.660798] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:24.816 [2024-11-19 10:57:31.661383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:24.816 [2024-11-19 10:57:31.728145] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:24.816 [2024-11-19 10:57:31.728374] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:24.816 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:24.816 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:30:24.816 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:24.816 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:24.816 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:24.816 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:24.816 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:24.816 [2024-11-19 10:57:31.966043] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:24.816 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:30:24.816 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:24.816 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:24.816 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:24.816 ************************************ 00:30:24.816 START TEST lvs_grow_clean 00:30:24.816 ************************************ 00:30:24.816 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:30:24.816 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:24.816 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:24.816 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:24.816 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:24.816 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:24.816 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:24.816 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:24.816 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:24.816 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:24.816 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:24.816 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:25.075 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=8b1b93c7-0e37-4398-9deb-72efc41724ec 00:30:25.075 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b1b93c7-0e37-4398-9deb-72efc41724ec 00:30:25.075 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:25.335 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:25.335 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:25.335 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8b1b93c7-0e37-4398-9deb-72efc41724ec lvol 150 00:30:25.594 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=83525c99-f561-4fdb-8320-0c9b8ffa8dc8 00:30:25.594 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:25.594 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:25.594 [2024-11-19 10:57:33.041759] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:25.594 [2024-11-19 10:57:33.041886] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:25.853 true 00:30:25.853 10:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:25.853 10:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b1b93c7-0e37-4398-9deb-72efc41724ec 00:30:25.853 10:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:25.853 10:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:26.112 10:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 83525c99-f561-4fdb-8320-0c9b8ffa8dc8 00:30:26.371 10:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:26.630 [2024-11-19 10:57:33.826241] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:26.630 10:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:26.630 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1885545 00:30:26.630 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:26.630 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1885545 /var/tmp/bdevperf.sock 00:30:26.630 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1885545 ']' 00:30:26.630 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:26.630 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:30:26.630 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:26.630 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:26.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:26.630 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:26.630 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:26.630 [2024-11-19 10:57:34.064589] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:30:26.630 [2024-11-19 10:57:34.064638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1885545 ] 00:30:26.890 [2024-11-19 10:57:34.139017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:26.890 [2024-11-19 10:57:34.181784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:26.890 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:26.890 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:30:26.890 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:27.149 Nvme0n1 00:30:27.149 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:27.408 [ 00:30:27.408 { 00:30:27.408 "name": "Nvme0n1", 00:30:27.408 "aliases": [ 00:30:27.408 "83525c99-f561-4fdb-8320-0c9b8ffa8dc8" 00:30:27.408 ], 00:30:27.408 "product_name": "NVMe disk", 00:30:27.408 "block_size": 4096, 00:30:27.408 "num_blocks": 38912, 00:30:27.408 "uuid": "83525c99-f561-4fdb-8320-0c9b8ffa8dc8", 00:30:27.409 "numa_id": 1, 00:30:27.409 "assigned_rate_limits": { 00:30:27.409 "rw_ios_per_sec": 0, 00:30:27.409 "rw_mbytes_per_sec": 0, 00:30:27.409 "r_mbytes_per_sec": 0, 00:30:27.409 "w_mbytes_per_sec": 0 00:30:27.409 }, 00:30:27.409 "claimed": false, 00:30:27.409 "zoned": false, 00:30:27.409 "supported_io_types": { 00:30:27.409 "read": true, 00:30:27.409 "write": true, 00:30:27.409 "unmap": true, 00:30:27.409 "flush": true, 00:30:27.409 "reset": true, 00:30:27.409 "nvme_admin": true, 00:30:27.409 "nvme_io": true, 00:30:27.409 "nvme_io_md": false, 00:30:27.409 "write_zeroes": true, 00:30:27.409 "zcopy": false, 00:30:27.409 "get_zone_info": false, 00:30:27.409 "zone_management": false, 00:30:27.409 "zone_append": false, 00:30:27.409 "compare": true, 00:30:27.409 "compare_and_write": true, 00:30:27.409 "abort": true, 00:30:27.409 "seek_hole": false, 00:30:27.409 "seek_data": false, 00:30:27.409 "copy": true, 
00:30:27.409 "nvme_iov_md": false 00:30:27.409 }, 00:30:27.409 "memory_domains": [ 00:30:27.409 { 00:30:27.409 "dma_device_id": "system", 00:30:27.409 "dma_device_type": 1 00:30:27.409 } 00:30:27.409 ], 00:30:27.409 "driver_specific": { 00:30:27.409 "nvme": [ 00:30:27.409 { 00:30:27.409 "trid": { 00:30:27.409 "trtype": "TCP", 00:30:27.409 "adrfam": "IPv4", 00:30:27.409 "traddr": "10.0.0.2", 00:30:27.409 "trsvcid": "4420", 00:30:27.409 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:27.409 }, 00:30:27.409 "ctrlr_data": { 00:30:27.409 "cntlid": 1, 00:30:27.409 "vendor_id": "0x8086", 00:30:27.409 "model_number": "SPDK bdev Controller", 00:30:27.409 "serial_number": "SPDK0", 00:30:27.409 "firmware_revision": "25.01", 00:30:27.409 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:27.409 "oacs": { 00:30:27.409 "security": 0, 00:30:27.409 "format": 0, 00:30:27.409 "firmware": 0, 00:30:27.409 "ns_manage": 0 00:30:27.409 }, 00:30:27.409 "multi_ctrlr": true, 00:30:27.409 "ana_reporting": false 00:30:27.409 }, 00:30:27.409 "vs": { 00:30:27.409 "nvme_version": "1.3" 00:30:27.409 }, 00:30:27.409 "ns_data": { 00:30:27.409 "id": 1, 00:30:27.409 "can_share": true 00:30:27.409 } 00:30:27.409 } 00:30:27.409 ], 00:30:27.409 "mp_policy": "active_passive" 00:30:27.409 } 00:30:27.409 } 00:30:27.409 ] 00:30:27.409 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1885711 00:30:27.409 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:27.409 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:27.409 Running I/O for 10 seconds... 
00:30:28.788 Latency(us) 00:30:28.788 [2024-11-19T09:57:36.237Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:28.788 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:28.788 Nvme0n1 : 1.00 22035.00 86.07 0.00 0.00 0.00 0.00 0.00 00:30:28.788 [2024-11-19T09:57:36.237Z] =================================================================================================================== 00:30:28.788 [2024-11-19T09:57:36.237Z] Total : 22035.00 86.07 0.00 0.00 0.00 0.00 0.00 00:30:28.788 00:30:29.356 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8b1b93c7-0e37-4398-9deb-72efc41724ec 00:30:29.615 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:29.615 Nvme0n1 : 2.00 22386.00 87.45 0.00 0.00 0.00 0.00 0.00 00:30:29.615 [2024-11-19T09:57:37.064Z] =================================================================================================================== 00:30:29.615 [2024-11-19T09:57:37.064Z] Total : 22386.00 87.45 0.00 0.00 0.00 0.00 0.00 00:30:29.615 00:30:29.615 true 00:30:29.615 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:29.615 10:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b1b93c7-0e37-4398-9deb-72efc41724ec 00:30:29.874 10:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:29.874 10:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:29.874 10:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1885711 00:30:30.443 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:30.443 Nvme0n1 : 3.00 22523.00 87.98 0.00 0.00 0.00 0.00 0.00 00:30:30.443 [2024-11-19T09:57:37.892Z] =================================================================================================================== 00:30:30.443 [2024-11-19T09:57:37.892Z] Total : 22523.00 87.98 0.00 0.00 0.00 0.00 0.00 00:30:30.443 00:30:31.822 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:31.822 Nvme0n1 : 4.00 22608.25 88.31 0.00 0.00 0.00 0.00 0.00 00:30:31.822 [2024-11-19T09:57:39.271Z] =================================================================================================================== 00:30:31.822 [2024-11-19T09:57:39.271Z] Total : 22608.25 88.31 0.00 0.00 0.00 0.00 0.00 00:30:31.822 00:30:32.758 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:32.758 Nvme0n1 : 5.00 22684.00 88.61 0.00 0.00 0.00 0.00 0.00 00:30:32.758 [2024-11-19T09:57:40.207Z] =================================================================================================================== 00:30:32.758 [2024-11-19T09:57:40.207Z] Total : 22684.00 88.61 0.00 0.00 0.00 0.00 0.00 00:30:32.758 00:30:33.696 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:33.696 Nvme0n1 : 6.00 22671.00 88.56 0.00 0.00 0.00 0.00 0.00 00:30:33.696 [2024-11-19T09:57:41.145Z] 
=================================================================================================================== 00:30:33.696 [2024-11-19T09:57:41.145Z] Total : 22671.00 88.56 0.00 0.00 0.00 0.00 0.00 00:30:33.696 00:30:34.633 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:34.633 Nvme0n1 : 7.00 22716.14 88.73 0.00 0.00 0.00 0.00 0.00 00:30:34.633 [2024-11-19T09:57:42.082Z] =================================================================================================================== 00:30:34.633 [2024-11-19T09:57:42.082Z] Total : 22716.14 88.73 0.00 0.00 0.00 0.00 0.00 00:30:34.633 00:30:35.571 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:35.571 Nvme0n1 : 8.00 22750.00 88.87 0.00 0.00 0.00 0.00 0.00 00:30:35.571 [2024-11-19T09:57:43.020Z] =================================================================================================================== 00:30:35.571 [2024-11-19T09:57:43.020Z] Total : 22750.00 88.87 0.00 0.00 0.00 0.00 0.00 00:30:35.571 00:30:36.510 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:36.510 Nvme0n1 : 9.00 22776.33 88.97 0.00 0.00 0.00 0.00 0.00 00:30:36.510 [2024-11-19T09:57:43.959Z] =================================================================================================================== 00:30:36.510 [2024-11-19T09:57:43.959Z] Total : 22776.33 88.97 0.00 0.00 0.00 0.00 0.00 00:30:36.510 00:30:37.450 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:37.450 Nvme0n1 : 10.00 22797.40 89.05 0.00 0.00 0.00 0.00 0.00 00:30:37.450 [2024-11-19T09:57:44.899Z] =================================================================================================================== 00:30:37.450 [2024-11-19T09:57:44.899Z] Total : 22797.40 89.05 0.00 0.00 0.00 0.00 0.00 00:30:37.450 00:30:37.450 00:30:37.450 Latency(us) 00:30:37.450 [2024-11-19T09:57:44.899Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:37.450 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:37.450 Nvme0n1 : 10.00 22802.49 89.07 0.00 0.00 5610.37 3305.29 29177.77 00:30:37.450 [2024-11-19T09:57:44.899Z] =================================================================================================================== 00:30:37.450 [2024-11-19T09:57:44.899Z] Total : 22802.49 89.07 0.00 0.00 5610.37 3305.29 29177.77 00:30:37.450 { 00:30:37.450 "results": [ 00:30:37.450 { 00:30:37.450 "job": "Nvme0n1", 00:30:37.450 "core_mask": "0x2", 00:30:37.450 "workload": "randwrite", 00:30:37.450 "status": "finished", 00:30:37.450 "queue_depth": 128, 00:30:37.450 "io_size": 4096, 00:30:37.450 "runtime": 10.003382, 00:30:37.450 "iops": 22802.48819849127, 00:30:37.450 "mibps": 89.07221952535653, 00:30:37.450 "io_failed": 0, 00:30:37.450 "io_timeout": 0, 00:30:37.450 "avg_latency_us": 5610.370369380898, 00:30:37.450 "min_latency_us": 3305.2939130434784, 00:30:37.450 "max_latency_us": 29177.76695652174 00:30:37.450 } 00:30:37.450 ], 00:30:37.450 "core_count": 1 00:30:37.450 } 00:30:37.450 10:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1885545 00:30:37.450 10:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1885545 ']' 00:30:37.450 10:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1885545 
00:30:37.450 10:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:30:37.450 10:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:37.450 10:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1885545 00:30:37.709 10:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:37.709 10:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:37.709 10:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1885545' 00:30:37.709 killing process with pid 1885545 00:30:37.709 10:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1885545 00:30:37.709 Received shutdown signal, test time was about 10.000000 seconds 00:30:37.709 00:30:37.709 Latency(us) 00:30:37.709 [2024-11-19T09:57:45.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:37.709 [2024-11-19T09:57:45.158Z] =================================================================================================================== 00:30:37.709 [2024-11-19T09:57:45.158Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:37.709 10:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1885545 00:30:37.709 10:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:37.968 10:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:38.227 10:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b1b93c7-0e37-4398-9deb-72efc41724ec 00:30:38.227 10:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:38.487 10:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:38.487 10:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:30:38.487 10:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:38.487 [2024-11-19 10:57:45.853825] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:38.487 10:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b1b93c7-0e37-4398-9deb-72efc41724ec 
00:30:38.487 10:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:30:38.487 10:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b1b93c7-0e37-4398-9deb-72efc41724ec 00:30:38.487 10:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:38.487 10:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:38.487 10:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:38.487 10:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:38.487 10:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:38.487 10:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:38.487 10:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:38.487 10:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:38.487 10:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b1b93c7-0e37-4398-9deb-72efc41724ec 00:30:38.746 request: 00:30:38.746 { 00:30:38.746 "uuid": "8b1b93c7-0e37-4398-9deb-72efc41724ec", 00:30:38.746 "method": "bdev_lvol_get_lvstores", 00:30:38.746 "req_id": 1 00:30:38.746 } 00:30:38.746 Got JSON-RPC error response 00:30:38.746 response: 00:30:38.746 { 00:30:38.746 "code": -19, 00:30:38.746 "message": "No such device" 00:30:38.746 } 00:30:38.746 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:30:38.746 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:38.746 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:38.746 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:38.746 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:39.006 aio_bdev 00:30:39.006 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
83525c99-f561-4fdb-8320-0c9b8ffa8dc8 00:30:39.006 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=83525c99-f561-4fdb-8320-0c9b8ffa8dc8 00:30:39.006 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:39.006 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:30:39.006 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:39.006 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:39.006 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:39.266 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 83525c99-f561-4fdb-8320-0c9b8ffa8dc8 -t 2000 00:30:39.266 [ 00:30:39.266 { 00:30:39.266 "name": "83525c99-f561-4fdb-8320-0c9b8ffa8dc8", 00:30:39.266 "aliases": [ 00:30:39.266 "lvs/lvol" 00:30:39.266 ], 00:30:39.266 "product_name": "Logical Volume", 00:30:39.266 "block_size": 4096, 00:30:39.266 "num_blocks": 38912, 00:30:39.266 "uuid": "83525c99-f561-4fdb-8320-0c9b8ffa8dc8", 00:30:39.266 "assigned_rate_limits": { 00:30:39.266 "rw_ios_per_sec": 0, 00:30:39.266 "rw_mbytes_per_sec": 0, 00:30:39.266 "r_mbytes_per_sec": 0, 00:30:39.266 "w_mbytes_per_sec": 0 00:30:39.266 }, 00:30:39.266 "claimed": false, 00:30:39.266 "zoned": false, 00:30:39.266 "supported_io_types": { 00:30:39.266 "read": true, 00:30:39.266 "write": true, 00:30:39.266 "unmap": true, 00:30:39.266 "flush": false, 00:30:39.266 "reset": true, 00:30:39.266 "nvme_admin": false, 00:30:39.266 "nvme_io": false, 00:30:39.266 "nvme_io_md": false, 00:30:39.266 "write_zeroes": true, 00:30:39.266 "zcopy": false, 00:30:39.266 "get_zone_info": false, 00:30:39.266 "zone_management": false, 00:30:39.266 "zone_append": false, 00:30:39.266 "compare": false, 00:30:39.266 "compare_and_write": false, 00:30:39.266 "abort": false, 00:30:39.266 "seek_hole": true, 00:30:39.266 "seek_data": true, 00:30:39.266 "copy": false, 00:30:39.266 "nvme_iov_md": false 00:30:39.266 }, 00:30:39.266 "driver_specific": { 00:30:39.266 "lvol": { 00:30:39.266 "lvol_store_uuid": "8b1b93c7-0e37-4398-9deb-72efc41724ec", 00:30:39.266 "base_bdev": "aio_bdev", 00:30:39.266 "thin_provision": false, 00:30:39.266 "num_allocated_clusters": 38, 00:30:39.266 "snapshot": false, 00:30:39.266 "clone": false, 00:30:39.266 "esnap_clone": false 00:30:39.266 } 00:30:39.266 } 00:30:39.266 } 00:30:39.266 ] 00:30:39.266 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:30:39.266 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b1b93c7-0e37-4398-9deb-72efc41724ec 00:30:39.266 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:39.525 10:57:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:39.525 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b1b93c7-0e37-4398-9deb-72efc41724ec 00:30:39.525 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:39.784 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:39.784 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 83525c99-f561-4fdb-8320-0c9b8ffa8dc8 00:30:40.044 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8b1b93c7-0e37-4398-9deb-72efc41724ec 00:30:40.305 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:40.305 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:40.305 00:30:40.305 real 0m15.718s 00:30:40.305 user 0m15.226s 00:30:40.305 sys 0m1.500s 00:30:40.305 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:40.305 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:40.305 ************************************ 00:30:40.305 END TEST lvs_grow_clean 00:30:40.305 ************************************ 00:30:40.565 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:30:40.565 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:40.565 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:40.565 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:40.565 ************************************ 00:30:40.565 START TEST lvs_grow_dirty 00:30:40.565 ************************************ 00:30:40.565 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:30:40.565 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:40.565 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:40.565 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:40.565 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:40.565 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:40.565 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:40.565 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:40.565 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:40.565 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:40.824 10:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:40.824 10:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:40.824 10:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=4ce4c49b-959e-47fc-a02d-0f31ee51c1cf 00:30:40.824 10:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ce4c49b-959e-47fc-a02d-0f31ee51c1cf 00:30:40.824 10:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:41.084 10:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:41.084 10:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:41.084 10:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4ce4c49b-959e-47fc-a02d-0f31ee51c1cf lvol 150 00:30:41.343 10:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=a6763327-aafe-4ec2-9377-d0142c7aaede 00:30:41.343 10:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:41.343 10:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:41.343 [2024-11-19 10:57:48.781763] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:41.343 [2024-11-19 10:57:48.781893] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:41.343 true 00:30:41.602 10:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ce4c49b-959e-47fc-a02d-0f31ee51c1cf 00:30:41.602 10:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:41.602 10:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:41.602 10:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:41.866 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a6763327-aafe-4ec2-9377-d0142c7aaede 00:30:42.148 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:42.148 [2024-11-19 10:57:49.554195] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:42.148 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:42.473 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1888125 00:30:42.473 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:42.473 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:42.473 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1888125 /var/tmp/bdevperf.sock 00:30:42.473 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1888125 ']' 00:30:42.473 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:42.473 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:42.473 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:42.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
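[Note] As in the clean run, bdevperf is launched with -z, which leaves it idle on its own RPC socket (/var/tmp/bdevperf.sock) until a perform_tests RPC arrives; the test first attaches the exported namespace as bdev Nvme0 over TCP, then kicks off the timed run with bdevperf.py. The per-second Latency(us) tables in this log are the interval statistics requested with -S 1. The sequence, condensed (paths shortened relative to the workspace):

# Sketch: run the workload against the exported lvol from a second SPDK app.
build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 \
    -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &      # -z: wait for perform_tests
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests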
00:30:42.473 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:42.473 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:42.473 [2024-11-19 10:57:49.806833] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:30:42.473 [2024-11-19 10:57:49.806880] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1888125 ] 00:30:42.473 [2024-11-19 10:57:49.882033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:42.731 [2024-11-19 10:57:49.924617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:42.731 10:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:42.731 10:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:30:42.731 10:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:42.990 Nvme0n1 00:30:43.249 10:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:43.249 [ 00:30:43.249 { 00:30:43.249 "name": "Nvme0n1", 00:30:43.249 "aliases": [ 00:30:43.249 "a6763327-aafe-4ec2-9377-d0142c7aaede" 00:30:43.249 ], 00:30:43.249 "product_name": "NVMe disk", 00:30:43.249 "block_size": 4096, 00:30:43.249 "num_blocks": 38912, 00:30:43.249 "uuid": "a6763327-aafe-4ec2-9377-d0142c7aaede", 00:30:43.249 "numa_id": 1, 00:30:43.249 "assigned_rate_limits": { 00:30:43.249 "rw_ios_per_sec": 0, 00:30:43.249 "rw_mbytes_per_sec": 0, 00:30:43.249 "r_mbytes_per_sec": 0, 00:30:43.249 "w_mbytes_per_sec": 0 00:30:43.249 }, 00:30:43.249 "claimed": false, 00:30:43.249 "zoned": false, 00:30:43.249 "supported_io_types": { 00:30:43.249 "read": true, 00:30:43.249 "write": true, 00:30:43.249 "unmap": true, 00:30:43.249 "flush": true, 00:30:43.249 "reset": true, 00:30:43.249 "nvme_admin": true, 00:30:43.249 "nvme_io": true, 00:30:43.249 "nvme_io_md": false, 00:30:43.249 "write_zeroes": true, 00:30:43.249 "zcopy": false, 00:30:43.249 "get_zone_info": false, 00:30:43.249 "zone_management": false, 00:30:43.249 "zone_append": false, 00:30:43.249 "compare": true, 00:30:43.249 "compare_and_write": true, 00:30:43.249 "abort": true, 00:30:43.249 "seek_hole": false, 00:30:43.249 "seek_data": false, 00:30:43.249 "copy": true, 00:30:43.249 "nvme_iov_md": false 00:30:43.249 }, 00:30:43.249 "memory_domains": [ 00:30:43.249 { 00:30:43.249 "dma_device_id": "system", 00:30:43.249 "dma_device_type": 1 00:30:43.249 } 00:30:43.249 ], 00:30:43.249 "driver_specific": { 00:30:43.249 "nvme": [ 00:30:43.249 { 00:30:43.249 "trid": { 00:30:43.249 "trtype": "TCP", 00:30:43.249 "adrfam": "IPv4", 00:30:43.249 "traddr": "10.0.0.2", 00:30:43.249 "trsvcid": "4420", 00:30:43.249 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:43.249 }, 00:30:43.249 "ctrlr_data": 
{ 00:30:43.249 "cntlid": 1, 00:30:43.249 "vendor_id": "0x8086", 00:30:43.249 "model_number": "SPDK bdev Controller", 00:30:43.249 "serial_number": "SPDK0", 00:30:43.249 "firmware_revision": "25.01", 00:30:43.249 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:43.249 "oacs": { 00:30:43.250 "security": 0, 00:30:43.250 "format": 0, 00:30:43.250 "firmware": 0, 00:30:43.250 "ns_manage": 0 00:30:43.250 }, 00:30:43.250 "multi_ctrlr": true, 00:30:43.250 "ana_reporting": false 00:30:43.250 }, 00:30:43.250 "vs": { 00:30:43.250 "nvme_version": "1.3" 00:30:43.250 }, 00:30:43.250 "ns_data": { 00:30:43.250 "id": 1, 00:30:43.250 "can_share": true 00:30:43.250 } 00:30:43.250 } 00:30:43.250 ], 00:30:43.250 "mp_policy": "active_passive" 00:30:43.250 } 00:30:43.250 } 00:30:43.250 ] 00:30:43.250 10:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:43.250 10:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1888313 00:30:43.250 10:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:43.508 Running I/O for 10 seconds... 00:30:44.444 Latency(us) 00:30:44.444 [2024-11-19T09:57:51.893Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:44.444 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:44.444 Nvme0n1 : 1.00 22098.00 86.32 0.00 0.00 0.00 0.00 0.00 00:30:44.444 [2024-11-19T09:57:51.893Z] =================================================================================================================== 00:30:44.444 [2024-11-19T09:57:51.893Z] Total : 22098.00 86.32 0.00 0.00 0.00 0.00 0.00 00:30:44.444 00:30:45.380 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4ce4c49b-959e-47fc-a02d-0f31ee51c1cf 00:30:45.380 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:45.380 Nvme0n1 : 2.00 22352.00 87.31 0.00 0.00 0.00 0.00 0.00 00:30:45.380 [2024-11-19T09:57:52.829Z] =================================================================================================================== 00:30:45.380 [2024-11-19T09:57:52.829Z] Total : 22352.00 87.31 0.00 0.00 0.00 0.00 0.00 00:30:45.380 00:30:45.380 true 00:30:45.639 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ce4c49b-959e-47fc-a02d-0f31ee51c1cf 00:30:45.639 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:45.639 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:45.639 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:45.639 10:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1888313 00:30:46.575 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:46.576 Nvme0n1 : 
3.00 22436.67 87.64 0.00 0.00 0.00 0.00 0.00 00:30:46.576 [2024-11-19T09:57:54.025Z] =================================================================================================================== 00:30:46.576 [2024-11-19T09:57:54.025Z] Total : 22436.67 87.64 0.00 0.00 0.00 0.00 0.00 00:30:46.576 00:30:47.512 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:47.512 Nvme0n1 : 4.00 22542.50 88.06 0.00 0.00 0.00 0.00 0.00 00:30:47.512 [2024-11-19T09:57:54.961Z] =================================================================================================================== 00:30:47.512 [2024-11-19T09:57:54.961Z] Total : 22542.50 88.06 0.00 0.00 0.00 0.00 0.00 00:30:47.512 00:30:48.448 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:48.448 Nvme0n1 : 5.00 22618.80 88.35 0.00 0.00 0.00 0.00 0.00 00:30:48.448 [2024-11-19T09:57:55.897Z] =================================================================================================================== 00:30:48.448 [2024-11-19T09:57:55.897Z] Total : 22618.80 88.35 0.00 0.00 0.00 0.00 0.00 00:30:48.448 00:30:49.385 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:49.385 Nvme0n1 : 6.00 22675.17 88.57 0.00 0.00 0.00 0.00 0.00 00:30:49.385 [2024-11-19T09:57:56.834Z] =================================================================================================================== 00:30:49.385 [2024-11-19T09:57:56.835Z] Total : 22675.17 88.57 0.00 0.00 0.00 0.00 0.00 00:30:49.386 00:30:50.322 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:50.322 Nvme0n1 : 7.00 22710.71 88.71 0.00 0.00 0.00 0.00 0.00 00:30:50.322 [2024-11-19T09:57:57.771Z] =================================================================================================================== 00:30:50.322 [2024-11-19T09:57:57.771Z] Total : 22710.71 88.71 0.00 0.00 0.00 0.00 0.00 00:30:50.322 00:30:51.700 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:51.700 Nvme0n1 : 8.00 22741.50 88.83 0.00 0.00 0.00 0.00 0.00 00:30:51.700 [2024-11-19T09:57:59.149Z] =================================================================================================================== 00:30:51.700 [2024-11-19T09:57:59.149Z] Total : 22741.50 88.83 0.00 0.00 0.00 0.00 0.00 00:30:51.700 00:30:52.632 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:52.632 Nvme0n1 : 9.00 22768.78 88.94 0.00 0.00 0.00 0.00 0.00 00:30:52.632 [2024-11-19T09:58:00.081Z] =================================================================================================================== 00:30:52.632 [2024-11-19T09:58:00.081Z] Total : 22768.78 88.94 0.00 0.00 0.00 0.00 0.00 00:30:52.632 00:30:53.564 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:53.564 Nvme0n1 : 10.00 22771.60 88.95 0.00 0.00 0.00 0.00 0.00 00:30:53.564 [2024-11-19T09:58:01.013Z] =================================================================================================================== 00:30:53.564 [2024-11-19T09:58:01.013Z] Total : 22771.60 88.95 0.00 0.00 0.00 0.00 0.00 00:30:53.564 00:30:53.564 00:30:53.564 Latency(us) 00:30:53.564 [2024-11-19T09:58:01.013Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:53.564 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:53.564 Nvme0n1 : 10.00 22770.50 88.95 0.00 0.00 5618.22 3305.29 26442.35 00:30:53.564 
[2024-11-19T09:58:01.013Z] =================================================================================================================== 00:30:53.564 [2024-11-19T09:58:01.013Z] Total : 22770.50 88.95 0.00 0.00 5618.22 3305.29 26442.35 00:30:53.564 { 00:30:53.564 "results": [ 00:30:53.564 { 00:30:53.564 "job": "Nvme0n1", 00:30:53.564 "core_mask": "0x2", 00:30:53.564 "workload": "randwrite", 00:30:53.564 "status": "finished", 00:30:53.564 "queue_depth": 128, 00:30:53.564 "io_size": 4096, 00:30:53.564 "runtime": 10.003295, 00:30:53.564 "iops": 22770.497121198565, 00:30:53.564 "mibps": 88.9472543796819, 00:30:53.564 "io_failed": 0, 00:30:53.564 "io_timeout": 0, 00:30:53.564 "avg_latency_us": 5618.224131874004, 00:30:53.564 "min_latency_us": 3305.2939130434784, 00:30:53.564 "max_latency_us": 26442.351304347827 00:30:53.564 } 00:30:53.564 ], 00:30:53.564 "core_count": 1 00:30:53.564 } 00:30:53.564 10:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1888125 00:30:53.564 10:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1888125 ']' 00:30:53.564 10:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1888125 00:30:53.564 10:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:30:53.564 10:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:53.564 10:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1888125 00:30:53.564 10:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:53.564 10:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:53.564 10:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1888125' 00:30:53.564 killing process with pid 1888125 00:30:53.564 10:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1888125 00:30:53.564 Received shutdown signal, test time was about 10.000000 seconds 00:30:53.564 00:30:53.564 Latency(us) 00:30:53.564 [2024-11-19T09:58:01.013Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:53.564 [2024-11-19T09:58:01.013Z] =================================================================================================================== 00:30:53.564 [2024-11-19T09:58:01.013Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:53.564 10:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1888125 00:30:53.564 10:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:53.822 10:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:30:54.079 10:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ce4c49b-959e-47fc-a02d-0f31ee51c1cf 00:30:54.079 10:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:54.337 10:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:54.337 10:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:30:54.337 10:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1885047 00:30:54.337 10:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1885047 00:30:54.337 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1885047 Killed "${NVMF_APP[@]}" "$@" 00:30:54.337 10:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:30:54.337 10:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:30:54.337 10:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:54.337 10:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:54.337 10:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:54.337 10:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1889977 00:30:54.337 10:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1889977 00:30:54.337 10:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:54.337 10:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1889977 ']' 00:30:54.337 10:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:54.337 10:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:54.337 10:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:54.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
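The dirty-lvstore step above reduces to one JSON-RPC call filtered through jq, followed by a forced kill of the target. A minimal sketch of that pattern, with rpc.py's path abbreviated as a placeholder (the UUID and pid are the ones from this run):

    # Query the lvstore and extract free_clusters, as nvmf_lvs_grow.sh@70 does.
    RPC_PY=/path/to/spdk/scripts/rpc.py   # placeholder for the workspace path
    LVS_UUID=4ce4c49b-959e-47fc-a02d-0f31ee51c1cf
    free_clusters=$("$RPC_PY" bdev_lvol_get_lvstores -u "$LVS_UUID" | jq -r '.[0].free_clusters')

    # SIGKILL gives the blobstore no chance to record a clean shutdown,
    # which is what leaves the lvstore "dirty" for the recovery steps below.
    kill -9 1885047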
00:30:54.337 10:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:54.337 10:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:54.337 [2024-11-19 10:58:01.675543] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:54.337 [2024-11-19 10:58:01.676485] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:30:54.337 [2024-11-19 10:58:01.676521] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:54.337 [2024-11-19 10:58:01.754207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:54.596 [2024-11-19 10:58:01.796193] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:54.596 [2024-11-19 10:58:01.796225] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:54.596 [2024-11-19 10:58:01.796233] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:54.596 [2024-11-19 10:58:01.796239] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:54.596 [2024-11-19 10:58:01.796244] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:54.596 [2024-11-19 10:58:01.796745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:54.596 [2024-11-19 10:58:01.864148] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:54.596 [2024-11-19 10:58:01.864370] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
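The target is then relaunched with the interrupt-mode flags used throughout this run. A condensed sketch, with the binary path abbreviated as a placeholder:

    # nvmf/common.sh@508: start nvmf_tgt inside the target-side network
    # namespace, pinned to core 0 (-m 0x1), with interrupt mode enabled.
    NVMF_TGT=/path/to/spdk/build/bin/nvmf_tgt   # placeholder for the workspace path
    ip netns exec cvl_0_0_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
    nvmfpid=$!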
00:30:54.596 10:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:54.596 10:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:30:54.596 10:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:54.596 10:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:54.596 10:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:54.596 10:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:54.596 10:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:54.855 [2024-11-19 10:58:02.106229] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:30:54.855 [2024-11-19 10:58:02.106425] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:30:54.855 [2024-11-19 10:58:02.106509] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:30:54.855 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:30:54.855 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev a6763327-aafe-4ec2-9377-d0142c7aaede 00:30:54.855 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=a6763327-aafe-4ec2-9377-d0142c7aaede 00:30:54.855 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:54.855 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:30:54.855 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:54.855 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:54.855 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:55.113 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a6763327-aafe-4ec2-9377-d0142c7aaede -t 2000 00:30:55.113 [ 00:30:55.113 { 00:30:55.113 "name": "a6763327-aafe-4ec2-9377-d0142c7aaede", 00:30:55.113 "aliases": [ 00:30:55.113 "lvs/lvol" 00:30:55.113 ], 00:30:55.113 "product_name": "Logical Volume", 00:30:55.113 "block_size": 4096, 00:30:55.113 "num_blocks": 38912, 00:30:55.113 "uuid": "a6763327-aafe-4ec2-9377-d0142c7aaede", 00:30:55.113 "assigned_rate_limits": { 00:30:55.113 "rw_ios_per_sec": 0, 00:30:55.113 "rw_mbytes_per_sec": 0, 00:30:55.113 
"r_mbytes_per_sec": 0, 00:30:55.113 "w_mbytes_per_sec": 0 00:30:55.113 }, 00:30:55.113 "claimed": false, 00:30:55.113 "zoned": false, 00:30:55.113 "supported_io_types": { 00:30:55.113 "read": true, 00:30:55.113 "write": true, 00:30:55.113 "unmap": true, 00:30:55.113 "flush": false, 00:30:55.113 "reset": true, 00:30:55.113 "nvme_admin": false, 00:30:55.113 "nvme_io": false, 00:30:55.113 "nvme_io_md": false, 00:30:55.113 "write_zeroes": true, 00:30:55.113 "zcopy": false, 00:30:55.113 "get_zone_info": false, 00:30:55.113 "zone_management": false, 00:30:55.113 "zone_append": false, 00:30:55.113 "compare": false, 00:30:55.113 "compare_and_write": false, 00:30:55.113 "abort": false, 00:30:55.113 "seek_hole": true, 00:30:55.113 "seek_data": true, 00:30:55.113 "copy": false, 00:30:55.113 "nvme_iov_md": false 00:30:55.113 }, 00:30:55.113 "driver_specific": { 00:30:55.113 "lvol": { 00:30:55.113 "lvol_store_uuid": "4ce4c49b-959e-47fc-a02d-0f31ee51c1cf", 00:30:55.113 "base_bdev": "aio_bdev", 00:30:55.113 "thin_provision": false, 00:30:55.113 "num_allocated_clusters": 38, 00:30:55.113 "snapshot": false, 00:30:55.113 "clone": false, 00:30:55.113 "esnap_clone": false 00:30:55.113 } 00:30:55.113 } 00:30:55.113 } 00:30:55.113 ] 00:30:55.113 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:30:55.113 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ce4c49b-959e-47fc-a02d-0f31ee51c1cf 00:30:55.113 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:30:55.372 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:30:55.372 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ce4c49b-959e-47fc-a02d-0f31ee51c1cf 00:30:55.372 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:30:55.631 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:30:55.631 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:55.890 [2024-11-19 10:58:03.093219] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:55.890 10:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ce4c49b-959e-47fc-a02d-0f31ee51c1cf 00:30:55.890 10:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:30:55.890 10:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ce4c49b-959e-47fc-a02d-0f31ee51c1cf 00:30:55.890 10:58:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:55.890 10:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:55.890 10:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:55.890 10:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:55.890 10:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:55.890 10:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:55.890 10:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:55.890 10:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:55.890 10:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ce4c49b-959e-47fc-a02d-0f31ee51c1cf 00:30:55.890 request: 00:30:55.890 { 00:30:55.890 "uuid": "4ce4c49b-959e-47fc-a02d-0f31ee51c1cf", 00:30:55.890 "method": "bdev_lvol_get_lvstores", 00:30:55.890 "req_id": 1 00:30:55.890 } 00:30:55.890 Got JSON-RPC error response 00:30:55.890 response: 00:30:55.890 { 00:30:55.890 "code": -19, 00:30:55.890 "message": "No such device" 00:30:55.890 } 00:30:56.148 10:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:30:56.148 10:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:56.148 10:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:56.148 10:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:56.148 10:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:56.148 aio_bdev 00:30:56.148 10:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a6763327-aafe-4ec2-9377-d0142c7aaede 00:30:56.148 10:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=a6763327-aafe-4ec2-9377-d0142c7aaede 00:30:56.148 10:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:56.148 10:58:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:30:56.148 10:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:56.148 10:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:56.148 10:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:56.406 10:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a6763327-aafe-4ec2-9377-d0142c7aaede -t 2000 00:30:56.665 [ 00:30:56.665 { 00:30:56.665 "name": "a6763327-aafe-4ec2-9377-d0142c7aaede", 00:30:56.665 "aliases": [ 00:30:56.665 "lvs/lvol" 00:30:56.665 ], 00:30:56.665 "product_name": "Logical Volume", 00:30:56.665 "block_size": 4096, 00:30:56.665 "num_blocks": 38912, 00:30:56.665 "uuid": "a6763327-aafe-4ec2-9377-d0142c7aaede", 00:30:56.665 "assigned_rate_limits": { 00:30:56.665 "rw_ios_per_sec": 0, 00:30:56.665 "rw_mbytes_per_sec": 0, 00:30:56.665 "r_mbytes_per_sec": 0, 00:30:56.665 "w_mbytes_per_sec": 0 00:30:56.665 }, 00:30:56.665 "claimed": false, 00:30:56.665 "zoned": false, 00:30:56.665 "supported_io_types": { 00:30:56.665 "read": true, 00:30:56.665 "write": true, 00:30:56.665 "unmap": true, 00:30:56.665 "flush": false, 00:30:56.665 "reset": true, 00:30:56.665 "nvme_admin": false, 00:30:56.665 "nvme_io": false, 00:30:56.665 "nvme_io_md": false, 00:30:56.665 "write_zeroes": true, 00:30:56.665 "zcopy": false, 00:30:56.665 "get_zone_info": false, 00:30:56.665 "zone_management": false, 00:30:56.665 "zone_append": false, 00:30:56.665 "compare": false, 00:30:56.665 "compare_and_write": false, 00:30:56.665 "abort": false, 00:30:56.665 "seek_hole": true, 00:30:56.665 "seek_data": true, 00:30:56.665 "copy": false, 00:30:56.665 "nvme_iov_md": false 00:30:56.665 }, 00:30:56.665 "driver_specific": { 00:30:56.665 "lvol": { 00:30:56.665 "lvol_store_uuid": "4ce4c49b-959e-47fc-a02d-0f31ee51c1cf", 00:30:56.665 "base_bdev": "aio_bdev", 00:30:56.665 "thin_provision": false, 00:30:56.665 "num_allocated_clusters": 38, 00:30:56.665 "snapshot": false, 00:30:56.665 "clone": false, 00:30:56.665 "esnap_clone": false 00:30:56.665 } 00:30:56.665 } 00:30:56.665 } 00:30:56.665 ] 00:30:56.665 10:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:30:56.665 10:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ce4c49b-959e-47fc-a02d-0f31ee51c1cf 00:30:56.665 10:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:56.924 10:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:56.924 10:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ce4c49b-959e-47fc-a02d-0f31ee51c1cf 00:30:56.924 10:58:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:56.924 10:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:56.924 10:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a6763327-aafe-4ec2-9377-d0142c7aaede 00:30:57.183 10:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4ce4c49b-959e-47fc-a02d-0f31ee51c1cf 00:30:57.442 10:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:57.701 10:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:57.701 00:30:57.701 real 0m17.139s 00:30:57.701 user 0m34.539s 00:30:57.701 sys 0m3.868s 00:30:57.701 10:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:57.701 10:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:57.701 ************************************ 00:30:57.701 END TEST lvs_grow_dirty 00:30:57.701 ************************************ 00:30:57.701 10:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:30:57.701 10:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:30:57.701 10:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:30:57.701 10:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:30:57.701 10:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:30:57.701 10:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:30:57.701 10:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:30:57.701 10:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:30:57.701 10:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:30:57.701 nvmf_trace.0 00:30:57.701 10:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:30:57.701 10:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:30:57.701 10:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:57.701 10:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
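The process_shm step above boils down to archiving the target's shared-memory trace file before teardown, so it can be replayed offline with spdk_trace as the startup notices suggest. A sketch, with the output directory as a placeholder:

    # Shm id 0 maps to /dev/shm/nvmf_trace.0, as named in the app's notices.
    OUTPUT_DIR=/path/to/output   # placeholder; the harness writes to its own output dir
    tar -C /dev/shm/ -cvzf "$OUTPUT_DIR/nvmf_trace.0_shm.tar.gz" nvmf_trace.0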
00:30:57.701 10:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:57.701 10:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:30:57.701 10:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:57.701 10:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:57.701 rmmod nvme_tcp 00:30:57.702 rmmod nvme_fabrics 00:30:57.702 rmmod nvme_keyring 00:30:57.702 10:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:57.702 10:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:30:57.702 10:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:30:57.702 10:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1889977 ']' 00:30:57.702 10:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1889977 00:30:57.702 10:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1889977 ']' 00:30:57.702 10:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1889977 00:30:57.702 10:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:30:57.702 10:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:57.702 10:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1889977 00:30:57.962 10:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:57.962 10:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:57.962 10:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1889977' 00:30:57.962 killing process with pid 1889977 00:30:57.962 10:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1889977 00:30:57.962 10:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1889977 00:30:57.962 10:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:57.962 10:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:57.962 10:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:57.962 10:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:30:57.962 10:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:30:57.962 10:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:57.962 10:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:30:57.962 10:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:57.962 10:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:57.962 10:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:57.962 10:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:57.962 10:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:00.499 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:00.499 00:31:00.499 real 0m42.000s 00:31:00.499 user 0m52.268s 00:31:00.499 sys 0m10.241s 00:31:00.499 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:00.499 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:00.499 ************************************ 00:31:00.499 END TEST nvmf_lvs_grow 00:31:00.499 ************************************ 00:31:00.499 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:31:00.499 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:00.499 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:00.499 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:00.499 ************************************ 00:31:00.499 START TEST nvmf_bdev_io_wait 00:31:00.499 ************************************ 00:31:00.499 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:31:00.499 * Looking for test storage... 
00:31:00.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:00.499 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:00.499 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:31:00.499 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:00.499 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:00.499 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:00.499 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:00.499 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:00.499 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:31:00.499 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:31:00.499 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:31:00.499 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:31:00.499 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:31:00.499 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:31:00.499 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:31:00.499 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:00.499 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:31:00.499 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:31:00.499 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:00.499 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:00.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:00.500 --rc genhtml_branch_coverage=1 00:31:00.500 --rc genhtml_function_coverage=1 00:31:00.500 --rc genhtml_legend=1 00:31:00.500 --rc geninfo_all_blocks=1 00:31:00.500 --rc geninfo_unexecuted_blocks=1 00:31:00.500 00:31:00.500 ' 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:00.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:00.500 --rc genhtml_branch_coverage=1 00:31:00.500 --rc genhtml_function_coverage=1 00:31:00.500 --rc genhtml_legend=1 00:31:00.500 --rc geninfo_all_blocks=1 00:31:00.500 --rc geninfo_unexecuted_blocks=1 00:31:00.500 00:31:00.500 ' 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:00.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:00.500 --rc genhtml_branch_coverage=1 00:31:00.500 --rc genhtml_function_coverage=1 00:31:00.500 --rc genhtml_legend=1 00:31:00.500 --rc geninfo_all_blocks=1 00:31:00.500 --rc geninfo_unexecuted_blocks=1 00:31:00.500 00:31:00.500 ' 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:00.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:00.500 --rc genhtml_branch_coverage=1 00:31:00.500 --rc genhtml_function_coverage=1 00:31:00.500 --rc genhtml_legend=1 00:31:00.500 --rc geninfo_all_blocks=1 00:31:00.500 --rc 
geninfo_unexecuted_blocks=1 00:31:00.500 00:31:00.500 ' 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:00.500 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:00.501 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:00.501 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:00.501 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:00.501 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:00.501 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:00.501 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:31:00.501 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:07.077 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:07.077 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:07.077 Found net devices under 0000:86:00.0: cvl_0_0 00:31:07.077 
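Each supported PCI function found above is resolved to its kernel net device through sysfs before any test plumbing begins. A standalone sketch over this run's two E810 ports (the second port is resolved the same way just below):

    # Mirrors pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) in nvmf/common.sh.
    for pci in 0000:86:00.0 0000:86:00.1; do
        for dev in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
        done
    done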
10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:07.077 Found net devices under 0000:86:00.1: cvl_0_1 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:07.077 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:07.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:07.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.442 ms 00:31:07.078 00:31:07.078 --- 10.0.0.2 ping statistics --- 00:31:07.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:07.078 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:07.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:07.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:31:07.078 00:31:07.078 --- 10.0.0.1 ping statistics --- 00:31:07.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:07.078 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1894031 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1894031 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1894031 ']' 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:07.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:07.078 [2024-11-19 10:58:13.650751] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:07.078 [2024-11-19 10:58:13.651705] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:31:07.078 [2024-11-19 10:58:13.651743] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:07.078 [2024-11-19 10:58:13.728993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:07.078 [2024-11-19 10:58:13.772852] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:07.078 [2024-11-19 10:58:13.772890] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:07.078 [2024-11-19 10:58:13.772898] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:07.078 [2024-11-19 10:58:13.772903] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:07.078 [2024-11-19 10:58:13.772908] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:07.078 [2024-11-19 10:58:13.774418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:07.078 [2024-11-19 10:58:13.774532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:07.078 [2024-11-19 10:58:13.774638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:07.078 [2024-11-19 10:58:13.774639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:07.078 [2024-11-19 10:58:13.774981] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:07.078 [2024-11-19 10:58:13.916115] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:07.078 [2024-11-19 10:58:13.916825] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:07.078 [2024-11-19 10:58:13.917076] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:07.078 [2024-11-19 10:58:13.917175] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
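The bdev_set_options call just traced is the heart of this test: a bdev_io pool of only 5 entries with a per-channel cache of 1 is deliberately tiny, so the bdevperf workloads that follow should exhaust it and force the queued-I/O-wait path that gives bdev_io_wait.sh its name (that reading of the parameters is an interpretation, not something the log states). Because the target was started with --wait-for-rpc, the option has to land before framework_start_init; roughly:

    # Pool sizing is fixed at framework start, so set it before releasing the pause.
    ./scripts/rpc.py bdev_set_options -p 5 -c 1    # bdev_io pool size 5, cache size 1
    ./scripts/rpc.py framework_start_init          # ends the --wait-for-rpc pause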
00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:07.078 [2024-11-19 10:58:13.927472] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:07.078 Malloc0 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.078 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:07.079 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.079 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:07.079 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.079 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:07.079 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.079 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:07.079 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.079 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:07.079 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.079 10:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:07.079 [2024-11-19 10:58:13.999563] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:07.079 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.079 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1894260 00:31:07.079 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:31:07.079 10:58:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:31:07.079 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1894262 00:31:07.079 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:07.079 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:07.079 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:07.079 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:07.079 { 00:31:07.079 "params": { 00:31:07.079 "name": "Nvme$subsystem", 00:31:07.079 "trtype": "$TEST_TRANSPORT", 00:31:07.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:07.079 "adrfam": "ipv4", 00:31:07.079 "trsvcid": "$NVMF_PORT", 00:31:07.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:07.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:07.079 "hdgst": ${hdgst:-false}, 00:31:07.079 "ddgst": ${ddgst:-false} 00:31:07.079 }, 00:31:07.079 "method": "bdev_nvme_attach_controller" 00:31:07.079 } 00:31:07.079 EOF 00:31:07.079 )") 00:31:07.079 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:31:07.079 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:31:07.079 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1894264 00:31:07.079 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:07.079 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:07.079 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:07.079 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:31:07.079 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:31:07.079 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:07.079 { 00:31:07.079 "params": { 00:31:07.079 "name": "Nvme$subsystem", 00:31:07.079 "trtype": "$TEST_TRANSPORT", 00:31:07.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:07.079 "adrfam": "ipv4", 00:31:07.079 "trsvcid": "$NVMF_PORT", 00:31:07.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:07.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:07.079 "hdgst": ${hdgst:-false}, 00:31:07.079 "ddgst": ${ddgst:-false} 00:31:07.079 }, 00:31:07.079 "method": "bdev_nvme_attach_controller" 00:31:07.079 } 00:31:07.079 EOF 00:31:07.079 )") 00:31:07.079 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # 
UNMAP_PID=1894267 00:31:07.079 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:07.079 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:07.079 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:31:07.079 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:07.079 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:07.079 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:07.079 { 00:31:07.079 "params": { 00:31:07.079 "name": "Nvme$subsystem", 00:31:07.079 "trtype": "$TEST_TRANSPORT", 00:31:07.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:07.079 "adrfam": "ipv4", 00:31:07.079 "trsvcid": "$NVMF_PORT", 00:31:07.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:07.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:07.079 "hdgst": ${hdgst:-false}, 00:31:07.079 "ddgst": ${ddgst:-false} 00:31:07.079 }, 00:31:07.079 "method": "bdev_nvme_attach_controller" 00:31:07.079 } 00:31:07.079 EOF 00:31:07.079 )") 00:31:07.079 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:31:07.079 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:31:07.079 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:07.079 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:07.079 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:07.079 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:07.079 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:07.079 { 00:31:07.079 "params": { 00:31:07.079 "name": "Nvme$subsystem", 00:31:07.079 "trtype": "$TEST_TRANSPORT", 00:31:07.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:07.079 "adrfam": "ipv4", 00:31:07.079 "trsvcid": "$NVMF_PORT", 00:31:07.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:07.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:07.079 "hdgst": ${hdgst:-false}, 00:31:07.079 "ddgst": ${ddgst:-false} 00:31:07.079 }, 00:31:07.079 "method": "bdev_nvme_attach_controller" 00:31:07.079 } 00:31:07.079 EOF 00:31:07.079 )") 00:31:07.079 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:07.079 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1894260 00:31:07.079 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:07.079 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:07.079 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
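Each of the four bdevperf jobs reads its bdev configuration from /dev/fd/63, which gen_nvmf_target_json fills using the heredoc shown four times above. As a sketch, the resolved document each instance consumes should look like the following; the params object matches the printf output the trace prints next, while the surrounding "subsystems"/"bdev"/"config" envelope is assumed here from the standard SPDK JSON-config shape, since the trace only shows the params fragment:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }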
00:31:07.079 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:07.079 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:07.079 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:07.079 "params": { 00:31:07.079 "name": "Nvme1", 00:31:07.079 "trtype": "tcp", 00:31:07.079 "traddr": "10.0.0.2", 00:31:07.079 "adrfam": "ipv4", 00:31:07.079 "trsvcid": "4420", 00:31:07.079 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:07.079 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:07.079 "hdgst": false, 00:31:07.079 "ddgst": false 00:31:07.079 }, 00:31:07.079 "method": "bdev_nvme_attach_controller" 00:31:07.079 }' 00:31:07.079 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:07.079 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:07.079 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:07.079 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:07.079 "params": { 00:31:07.079 "name": "Nvme1", 00:31:07.079 "trtype": "tcp", 00:31:07.079 "traddr": "10.0.0.2", 00:31:07.079 "adrfam": "ipv4", 00:31:07.079 "trsvcid": "4420", 00:31:07.079 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:07.079 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:07.079 "hdgst": false, 00:31:07.079 "ddgst": false 00:31:07.079 }, 00:31:07.079 "method": "bdev_nvme_attach_controller" 00:31:07.079 }' 00:31:07.079 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:07.079 "params": { 00:31:07.079 "name": "Nvme1", 00:31:07.079 "trtype": "tcp", 00:31:07.079 "traddr": "10.0.0.2", 00:31:07.079 "adrfam": "ipv4", 00:31:07.079 "trsvcid": "4420", 00:31:07.079 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:07.079 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:07.079 "hdgst": false, 00:31:07.079 "ddgst": false 00:31:07.079 }, 00:31:07.080 "method": "bdev_nvme_attach_controller" 00:31:07.080 }' 00:31:07.080 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:07.080 10:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:07.080 "params": { 00:31:07.080 "name": "Nvme1", 00:31:07.080 "trtype": "tcp", 00:31:07.080 "traddr": "10.0.0.2", 00:31:07.080 "adrfam": "ipv4", 00:31:07.080 "trsvcid": "4420", 00:31:07.080 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:07.080 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:07.080 "hdgst": false, 00:31:07.080 "ddgst": false 00:31:07.080 }, 00:31:07.080 "method": "bdev_nvme_attach_controller" 00:31:07.080 }' 00:31:07.080 [2024-11-19 10:58:14.049228] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:31:07.080 [2024-11-19 10:58:14.049279] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:31:07.080 [2024-11-19 10:58:14.053627] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:31:07.080 [2024-11-19 10:58:14.053670] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:31:07.080 [2024-11-19 10:58:14.055369] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:31:07.080 [2024-11-19 10:58:14.055412] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:31:07.080 [2024-11-19 10:58:14.056427] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:31:07.080 [2024-11-19 10:58:14.056469] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:31:07.080 [2024-11-19 10:58:14.234548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:07.080 [2024-11-19 10:58:14.277582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:07.080 [2024-11-19 10:58:14.318207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:07.080 [2024-11-19 10:58:14.369143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:07.080 [2024-11-19 10:58:14.373772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:07.080 [2024-11-19 10:58:14.412024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:07.080 [2024-11-19 10:58:14.427692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:07.080 [2024-11-19 10:58:14.470690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:31:07.339 Running I/O for 1 seconds... 00:31:07.339 Running I/O for 1 seconds... 00:31:07.339 Running I/O for 1 seconds... 00:31:07.339 Running I/O for 1 seconds... 
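At this point all four initiator-side jobs are running in parallel, one reactor each on cores 4-7, kept apart as independent DPDK processes by their distinct -i shm ids and --file-prefix spdk1..spdk4 values. Lining up the four invocations from the trace makes the matrix easy to read: same queue depth (128), same 4 KiB I/O size, same 1-second run and 256 MB memory budget, differing only in core mask and workload:

    bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256
    bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read  -t 1 -s 256
    bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256
    bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256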
00:31:08.275 12992.00 IOPS, 50.75 MiB/s
00:31:08.275 Latency(us)
00:31:08.275 [2024-11-19T09:58:15.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:08.275 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:31:08.275 Nvme1n1 : 1.01 13032.25 50.91 0.00 0.00 9785.92 3262.55 11967.44
00:31:08.275 [2024-11-19T09:58:15.725Z] ===================================================================================================================
00:31:08.276 [2024-11-19T09:58:15.725Z] Total : 13032.25 50.91 0.00 0.00 9785.92 3262.55 11967.44
00:31:08.276 246432.00 IOPS, 962.62 MiB/s
00:31:08.276 Latency(us)
00:31:08.276 [2024-11-19T09:58:15.725Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:08.276 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:31:08.276 Nvme1n1 : 1.00 246046.08 961.12 0.00 0.00 517.29 233.29 1538.67
00:31:08.276 [2024-11-19T09:58:15.725Z] ===================================================================================================================
00:31:08.276 [2024-11-19T09:58:15.725Z] Total : 246046.08 961.12 0.00 0.00 517.29 233.29 1538.67
00:31:08.276 11813.00 IOPS, 46.14 MiB/s
[2024-11-19T09:58:15.725Z] 10190.00 IOPS, 39.80 MiB/s
00:31:08.276 Latency(us)
00:31:08.276 [2024-11-19T09:58:15.725Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:08.276 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:31:08.276 Nvme1n1 : 1.01 11888.23 46.44 0.00 0.00 10736.32 1980.33 16298.52
00:31:08.276 [2024-11-19T09:58:15.725Z] ===================================================================================================================
00:31:08.276 [2024-11-19T09:58:15.725Z] Total : 11888.23 46.44 0.00 0.00 10736.32 1980.33 16298.52
00:31:08.276
00:31:08.276 Latency(us)
00:31:08.276 [2024-11-19T09:58:15.725Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:08.276 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:31:08.276 Nvme1n1 : 1.01 10270.11 40.12 0.00 0.00 12429.12 1894.85 19147.91
00:31:08.276 [2024-11-19T09:58:15.725Z] ===================================================================================================================
00:31:08.276 [2024-11-19T09:58:15.725Z] Total : 10270.11 40.12 0.00 0.00 12429.12 1894.85 19147.91
00:31:08.535 10:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1894262
00:31:08.535 10:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1894264
00:31:08.535 10:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1894267
00:31:08.535 10:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:08.535 10:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:08.535 10:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:31:08.535 10:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:08.535 10:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:31:08.535 10:58:15
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:31:08.535 10:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:08.535 10:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:31:08.535 10:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:08.535 10:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:31:08.535 10:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:08.535 10:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:08.535 rmmod nvme_tcp 00:31:08.535 rmmod nvme_fabrics 00:31:08.535 rmmod nvme_keyring 00:31:08.535 10:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:08.535 10:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:31:08.535 10:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:31:08.535 10:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1894031 ']' 00:31:08.535 10:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1894031 00:31:08.535 10:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1894031 ']' 00:31:08.535 10:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1894031 00:31:08.535 10:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:31:08.535 10:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:08.535 10:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1894031 00:31:08.535 10:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:08.535 10:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:08.535 10:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1894031' 00:31:08.535 killing process with pid 1894031 00:31:08.535 10:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1894031 00:31:08.535 10:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1894031 00:31:08.794 10:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:08.794 10:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:08.794 10:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:08.794 10:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:31:08.794 10:58:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:31:08.794 10:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:31:08.794 10:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:08.794 10:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:08.794 10:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:08.794 10:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:08.794 10:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:08.794 10:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:11.333 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:11.333 00:31:11.333 real 0m10.735s 00:31:11.333 user 0m14.931s 00:31:11.333 sys 0m6.485s 00:31:11.333 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:11.333 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:11.333 ************************************ 00:31:11.333 END TEST nvmf_bdev_io_wait 00:31:11.333 ************************************ 00:31:11.333 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:31:11.333 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:11.333 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:11.333 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:11.333 ************************************ 00:31:11.333 START TEST nvmf_queue_depth 00:31:11.333 ************************************ 00:31:11.333 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:31:11.333 * Looking for test storage... 
00:31:11.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:11.333 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:11.333 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:31:11.333 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:11.333 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:11.333 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:11.333 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:11.333 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:11.333 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:31:11.333 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:31:11.333 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:31:11.333 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:31:11.333 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:31:11.333 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:31:11.333 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:31:11.333 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:11.333 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:31:11.333 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:31:11.333 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:11.333 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:11.333 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:31:11.333 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:31:11.333 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:11.333 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:31:11.333 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:31:11.333 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:31:11.333 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:31:11.333 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:11.333 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:31:11.333 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:31:11.333 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:11.333 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:11.333 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:31:11.333 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:11.333 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:11.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:11.333 --rc genhtml_branch_coverage=1 00:31:11.333 --rc genhtml_function_coverage=1 00:31:11.333 --rc genhtml_legend=1 00:31:11.333 --rc geninfo_all_blocks=1 00:31:11.333 --rc geninfo_unexecuted_blocks=1 00:31:11.333 00:31:11.333 ' 00:31:11.333 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:11.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:11.333 --rc genhtml_branch_coverage=1 00:31:11.333 --rc genhtml_function_coverage=1 00:31:11.333 --rc genhtml_legend=1 00:31:11.333 --rc geninfo_all_blocks=1 00:31:11.333 --rc geninfo_unexecuted_blocks=1 00:31:11.333 00:31:11.333 ' 00:31:11.333 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:11.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:11.333 --rc genhtml_branch_coverage=1 00:31:11.333 --rc genhtml_function_coverage=1 00:31:11.333 --rc genhtml_legend=1 00:31:11.333 --rc geninfo_all_blocks=1 00:31:11.333 --rc geninfo_unexecuted_blocks=1 00:31:11.333 00:31:11.333 ' 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:11.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:11.334 --rc genhtml_branch_coverage=1 00:31:11.334 --rc genhtml_function_coverage=1 00:31:11.334 --rc genhtml_legend=1 00:31:11.334 --rc geninfo_all_blocks=1 00:31:11.334 --rc 
geninfo_unexecuted_blocks=1 00:31:11.334 00:31:11.334 ' 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:31:11.334 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:16.608 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:16.608 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:31:16.608 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:16.608 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:16.608 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:16.608 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:31:16.608 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:16.608 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:31:16.608 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:16.608 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:31:16.608 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:31:16.608 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:31:16.608 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:31:16.608 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:31:16.608 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:31:16.608 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:16.608 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:16.608 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:16.608 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:16.608 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:16.608 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:16.608 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:16.608 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:16.608 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:16.608 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:16.608 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:16.608 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:16.608 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:16.608 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:16.608 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:16.608 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:16.608 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:16.608 10:58:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:16.609 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:16.609 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:31:16.609 Found net devices under 0000:86:00.0: cvl_0_0 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:16.609 Found net devices under 0000:86:00.1: cvl_0_1 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:16.609 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:16.868 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:16.868 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:16.868 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:16.868 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:16.868 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:16.868 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:16.868 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:16.868 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:16.868 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:16.868 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:16.868 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms 00:31:16.868 00:31:16.868 --- 10.0.0.2 ping statistics --- 00:31:16.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:16.868 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:31:16.868 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:16.868 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:16.868 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:31:16.868 00:31:16.868 --- 10.0.0.1 ping statistics --- 00:31:16.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:16.868 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:31:16.868 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:16.868 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:31:16.868 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:16.868 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:16.868 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:16.868 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:16.868 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:16.868 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:16.868 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:16.868 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:31:16.868 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:16.868 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:16.868 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:17.127 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1898044 00:31:17.127 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:31:17.127 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1898044 00:31:17.127 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1898044 ']' 00:31:17.127 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:17.127 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:17.127 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:17.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
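
The nvmf_tcp_init sequence traced above splits the two E810 ports so one host can play both roles: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target interface (10.0.0.2/24) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1/24), a firewall exception is punched for the NVMe/TCP port, and connectivity is verified in both directions. Condensed from the trace, minus the harness wrappers (requires root):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port enters the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                    # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target namespace -> initiator
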
00:31:17.127 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:17.127 10:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:17.127 [2024-11-19 10:58:24.371314] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:17.127 [2024-11-19 10:58:24.372239] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:31:17.127 [2024-11-19 10:58:24.372274] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:17.127 [2024-11-19 10:58:24.454786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:17.127 [2024-11-19 10:58:24.494436] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:17.127 [2024-11-19 10:58:24.494472] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:17.127 [2024-11-19 10:58:24.494480] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:17.127 [2024-11-19 10:58:24.494486] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:17.127 [2024-11-19 10:58:24.494492] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:17.127 [2024-11-19 10:58:24.495066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:17.127 [2024-11-19 10:58:24.561925] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:17.127 [2024-11-19 10:58:24.562173] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
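
nvmfappstart prepends the namespace wrapper to NVMF_APP, so the target runs inside cvl_0_0_ns_spdk with interrupt mode enabled, and waitforlisten then blocks until the app answers on its RPC socket. A simplified sketch of that pattern, with paths relative to an SPDK checkout (the retry loop is an approximation of the waitforlisten helper in autotest_common.sh; rpc_get_methods is used here only as a cheap readiness probe):

  # Launch nvmf_tgt in the target namespace: instance 0, tracepoint
  # group 0xFFFF, interrupt mode, core mask 0x2 (as in the trace above).
  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
  nvmfpid=$!
  # Poll the RPC socket until the app is up, or give up after 100 tries.
  for ((i = 0; i < 100; i++)); do
      ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
      sleep 0.1
  done
  (( i < 100 )) || { echo "nvmf_tgt never listened on /var/tmp/spdk.sock"; exit 1; }
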
00:31:18.064 10:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:18.064 10:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:31:18.064 10:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:18.064 10:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:18.064 10:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:18.064 10:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:18.064 10:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:18.064 10:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:18.064 10:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:18.064 [2024-11-19 10:58:25.247744] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:18.064 10:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:18.064 10:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:18.064 10:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:18.064 10:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:18.064 Malloc0 00:31:18.064 10:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:18.064 10:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:18.064 10:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:18.064 10:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:18.064 10:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:18.064 10:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:18.064 10:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:18.065 10:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:18.065 10:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:18.065 10:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:18.065 10:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:31:18.065 10:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:18.065 [2024-11-19 10:58:25.315696] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:18.065 10:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:18.065 10:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1898082 00:31:18.065 10:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:31:18.065 10:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:18.065 10:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1898082 /var/tmp/bdevperf.sock 00:31:18.065 10:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1898082 ']' 00:31:18.065 10:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:18.065 10:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:18.065 10:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:18.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:18.065 10:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:18.065 10:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:18.065 [2024-11-19 10:58:25.366671] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
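
Each rpc_cmd above is a thin wrapper over scripts/rpc.py against the target's default /var/tmp/spdk.sock. Spelled out, the queue-depth fixture is five RPCs: create the TCP transport, back a malloc bdev, create the subsystem, attach the bdev as a namespace, and expose a listener on the in-namespace address (arguments copied from the trace):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB, 512 B blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001                                 # -a: allow any host
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420
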
00:31:18.065 [2024-11-19 10:58:25.366715] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1898082 ]
00:31:18.065 [2024-11-19 10:58:25.440746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:18.065 [2024-11-19 10:58:25.483587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:18.324 10:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:18.324 10:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0
00:31:18.324 10:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:31:18.324 10:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:18.324 10:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:31:18.324 NVMe0n1
00:31:18.324 10:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:18.324 10:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:31:18.582 Running I/O for 10 seconds...
00:31:20.456 11437.00 IOPS, 44.68 MiB/s
[2024-11-19T09:58:28.841Z] 11954.00 IOPS, 46.70 MiB/s
[2024-11-19T09:58:30.217Z] 12019.33 IOPS, 46.95 MiB/s
[2024-11-19T09:58:31.154Z] 12087.25 IOPS, 47.22 MiB/s
[2024-11-19T09:58:32.091Z] 12165.20 IOPS, 47.52 MiB/s
[2024-11-19T09:58:33.028Z] 12198.00 IOPS, 47.65 MiB/s
[2024-11-19T09:58:33.965Z] 12240.00 IOPS, 47.81 MiB/s
[2024-11-19T09:58:34.902Z] 12248.38 IOPS, 47.85 MiB/s
[2024-11-19T09:58:36.280Z] 12272.78 IOPS, 47.94 MiB/s
[2024-11-19T09:58:36.280Z] 12280.40 IOPS, 47.97 MiB/s
00:31:28.831 Latency(us)
00:31:28.831 [2024-11-19T09:58:36.280Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:28.831 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:31:28.831 Verification LBA range: start 0x0 length 0x4000
00:31:28.831 NVMe0n1 : 10.06 12304.54 48.06 0.00 0.00 82951.42 19603.81 54480.36
00:31:28.831 [2024-11-19T09:58:36.280Z] ===================================================================================================================
00:31:28.831 [2024-11-19T09:58:36.280Z] Total : 12304.54 48.06 0.00 0.00 82951.42 19603.81 54480.36
00:31:28.831 {
00:31:28.831 "results": [
00:31:28.831 {
00:31:28.831 "job": "NVMe0n1",
00:31:28.831 "core_mask": "0x1",
00:31:28.831 "workload": "verify",
00:31:28.831 "status": "finished",
00:31:28.831 "verify_range": {
00:31:28.831 "start": 0,
00:31:28.831 "length": 16384
00:31:28.831 },
00:31:28.831 "queue_depth": 1024,
00:31:28.831 "io_size": 4096,
00:31:28.831 "runtime": 10.062955,
00:31:28.831 "iops": 12304.536788647072,
00:31:28.831 "mibps": 48.064596830652626,
00:31:28.831 "io_failed": 0,
00:31:28.831 "io_timeout": 0,
00:31:28.831 "avg_latency_us": 82951.42350199798,
00:31:28.831 "min_latency_us": 19603.812173913044,
00:31:28.831 "max_latency_us": 54480.361739130436
00:31:28.831 }
00:31:28.831 ],
00:31:28.831 "core_count": 1
00:31:28.831 }
00:31:28.831 10:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1898082
00:31:28.831 10:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1898082 ']'
00:31:28.831 10:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1898082
00:31:28.831 10:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname
00:31:28.831 10:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:28.831 10:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1898082
00:31:28.831 10:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:31:28.831 10:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:31:28.831 10:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1898082'
00:31:28.831 killing process with pid 1898082
00:31:28.831 10:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1898082
00:31:28.831 Received shutdown signal, test time was about 10.000000 seconds
00:31:28.831
00:31:28.831 Latency(us)
[2024-11-19T09:58:36.280Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-11-19T09:58:36.280Z] ===================================================================================================================
[2024-11-19T09:58:36.280Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:28.831 10:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1898082
00:31:28.831 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:31:28.831 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:31:28.831 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:28.831 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync
00:31:28.831 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:28.831 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e
00:31:28.831 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:28.831 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:28.831 rmmod nvme_tcp
00:31:28.831 rmmod nvme_fabrics
00:31:28.831 rmmod nvme_keyring
00:31:28.831 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:28.831 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e
00:31:28.831 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0
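
On the initiator side the test drives I/O with bdevperf in RPC-controlled mode: started with -z it waits idle, the fabric controller is attached over the listener created earlier, and the helper script triggers the run whose per-second samples and summary table appear above. Condensed from the trace, with paths relative to an SPDK checkout:

  # Start bdevperf idle: queue depth 1024, 4 KiB verify workload, 10 s run.
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
      -q 1024 -o 4096 -w verify -t 10 &
  bdevperf_pid=$!
  # Attach the remote namespace as bdev NVMe0n1 over NVMe/TCP.
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # Kick off the configured workload and wait for the summary table.
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
  kill "$bdevperf_pid"
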
00:31:28.831 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1898044 ']' 00:31:28.831 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1898044 00:31:28.831 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1898044 ']' 00:31:28.831 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1898044 00:31:28.831 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:31:28.831 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:28.831 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1898044 00:31:28.831 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:28.831 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:28.831 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1898044' 00:31:28.831 killing process with pid 1898044 00:31:28.831 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1898044 00:31:28.831 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1898044 00:31:29.091 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:29.091 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:29.091 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:29.091 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:31:29.091 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:31:29.091 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:29.091 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:31:29.091 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:29.091 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:29.091 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:29.091 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:29.091 10:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:31.628 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:31.629 00:31:31.629 real 0m20.227s 00:31:31.629 user 0m22.903s 00:31:31.629 sys 0m6.229s 00:31:31.629 10:58:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:31.629 ************************************ 00:31:31.629 END TEST nvmf_queue_depth 00:31:31.629 ************************************ 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:31.629 ************************************ 00:31:31.629 START TEST nvmf_target_multipath 00:31:31.629 ************************************ 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:31.629 * Looking for test storage... 00:31:31.629 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@344 -- # case "$op" in 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:31.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.629 --rc genhtml_branch_coverage=1 00:31:31.629 --rc genhtml_function_coverage=1 00:31:31.629 --rc genhtml_legend=1 00:31:31.629 --rc geninfo_all_blocks=1 00:31:31.629 --rc geninfo_unexecuted_blocks=1 00:31:31.629 00:31:31.629 ' 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:31.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.629 --rc genhtml_branch_coverage=1 00:31:31.629 --rc genhtml_function_coverage=1 00:31:31.629 --rc genhtml_legend=1 00:31:31.629 --rc geninfo_all_blocks=1 00:31:31.629 --rc geninfo_unexecuted_blocks=1 00:31:31.629 00:31:31.629 ' 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:31.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.629 --rc genhtml_branch_coverage=1 00:31:31.629 --rc genhtml_function_coverage=1 00:31:31.629 --rc genhtml_legend=1 
00:31:31.629 --rc geninfo_all_blocks=1 00:31:31.629 --rc geninfo_unexecuted_blocks=1 00:31:31.629 00:31:31.629 ' 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:31.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.629 --rc genhtml_branch_coverage=1 00:31:31.629 --rc genhtml_function_coverage=1 00:31:31.629 --rc genhtml_legend=1 00:31:31.629 --rc geninfo_all_blocks=1 00:31:31.629 --rc geninfo_unexecuted_blocks=1 00:31:31.629 00:31:31.629 ' 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:31.629 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.630 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.630 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.630 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:31:31.630 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.630 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:31:31.630 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:31:31.630 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:31.630 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:31.630 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:31.630 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:31.630 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:31.630 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:31.630 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:31.630 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:31.630 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:31.630 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:31.630 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:31.630 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:31:31.630 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:31.630 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:31:31.630 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:31.630 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:31.630 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:31.630 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:31.630 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:31.630 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:31.630 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:31.630 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:31.630 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:31.630 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:31.630 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:31:31.630 10:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:31:37.043 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:37.043 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:31:37.043 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:37.043 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:37.043 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:37.043 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:37.043 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:37.043 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:31:37.043 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:37.043 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:31:37.043 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:31:37.043 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:31:37.043 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:31:37.043 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:31:37.043 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:31:37.043 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:37.043 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:37.043 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:37.043 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:37.043 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:37.043 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:37.043 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:37.043 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:37.043 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:37.043 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:37.043 10:58:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:37.043 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:37.044 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:37.044 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:37.044 10:58:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:37.044 Found net devices under 0000:86:00.0: cvl_0_0 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:37.044 Found net devices under 0000:86:00.1: cvl_0_1 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:37.044 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:37.313 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:37.313 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:37.313 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:37.313 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:37.313 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:37.313 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:37.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
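
The ipts wrapper tags every rule it installs with an 'SPDK_NVMF:' comment; teardown (iptr, visible at the end of the previous test) then drops exactly those rules by replaying the saved ruleset without the tagged lines, so unrelated firewall state survives the test. The pair, reduced to plain iptables:

  # Install: tag the ACCEPT rule so it can be identified again later.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # Cleanup: replay the ruleset minus every SPDK_NVMF-tagged rule.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
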
00:31:37.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.413 ms 00:31:37.313 00:31:37.313 --- 10.0.0.2 ping statistics --- 00:31:37.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:37.313 rtt min/avg/max/mdev = 0.413/0.413/0.413/0.000 ms 00:31:37.313 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:37.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:37.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:31:37.313 00:31:37.313 --- 10.0.0.1 ping statistics --- 00:31:37.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:37.313 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:31:37.313 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:37.313 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:31:37.313 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:37.313 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:37.313 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:37.313 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:37.313 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:37.313 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:37.313 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:37.313 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:31:37.313 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:31:37.313 only one NIC for nvmf test 00:31:37.313 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:31:37.313 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:37.313 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:37.313 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:37.313 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:37.313 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:37.313 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:37.313 rmmod nvme_tcp 00:31:37.313 rmmod nvme_fabrics 00:31:37.313 rmmod nvme_keyring 00:31:37.313 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:37.313 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:37.313 10:58:44 
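After addressing, both directions are verified with one ping each; the multipath test then bails out because NVMF_SECOND_TARGET_IP stayed empty (hence "only one NIC for nvmf test") and runs nvmftestfini. Teardown retries the module unload because nvme-tcp can stay referenced briefly while connections drain. A sketch of that pattern using the `{1..20}` bound from the trace (the retry body and backoff are assumptions, the log only shows the loop header):

    ping -c 1 10.0.0.2                                  # root ns -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> root ns
    modprobe nvme-tcp                                   # pulls in nvme-fabrics as well

    sync                                                # flush before unloading modules
    set +e                                              # unload may fail while refs drain
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1                                         # assumed backoff between attempts
    done
    set -e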
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:37.313 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:37.313 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:37.313 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:37.313 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:37.313 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:37.313 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:37.313 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:37.313 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:37.313 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:37.313 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:37.313 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:37.313 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:37.313 10:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:39.856 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:39.856 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:31:39.856 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:31:39.856 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:39.856 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:39.856 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:39.856 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:39.856 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:39.856 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:39.856 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:39.856 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:39.856 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:39.856 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:39.856 10:58:46 
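The `iptr` helper traced above cleans the firewall by round-tripping the ruleset and dropping only the comment-tagged test rules; nothing else in the table is touched. Condensed sketch (the `ip netns delete` line is an assumption about what `_remove_spdk_ns` ultimately does, the save/restore pipeline is verbatim from the trace):

    # Strip every rule carrying the SPDK_NVMF comment tag, keep the rest:
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Drop the test namespace and any leftover initiator-side address:
    ip netns delete cvl_0_0_ns_spdk 2> /dev/null
    ip -4 addr flush cvl_0_1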
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:39.856 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:39.856 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:39.856 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:39.856 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:39.856 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:39.856 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:39.856 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:39.856 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:39.856 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:39.856 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:39.856 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:39.856 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:39.856 00:31:39.856 real 0m8.223s 00:31:39.856 user 0m1.764s 00:31:39.856 sys 0m4.483s 00:31:39.856 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:39.856 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:39.856 ************************************ 00:31:39.856 END TEST nvmf_target_multipath 00:31:39.856 ************************************ 00:31:39.856 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:39.856 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:39.856 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:39.856 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:39.856 ************************************ 00:31:39.856 START TEST nvmf_zcopy 00:31:39.856 ************************************ 00:31:39.856 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:39.856 * Looking for test storage... 
00:31:39.856 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:39.856 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:39.856 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:31:39.856 10:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:39.856 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:39.856 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:39.856 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:39.856 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:39.856 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:31:39.856 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:31:39.856 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:31:39.856 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:31:39.856 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:31:39.856 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:31:39.856 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:31:39.856 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:39.856 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:31:39.856 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:31:39.856 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:39.856 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:39.856 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:31:39.856 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:31:39.856 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:39.856 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:31:39.856 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:31:39.856 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:31:39.856 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:31:39.856 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:39.856 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:31:39.856 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:31:39.856 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:39.856 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:39.856 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:31:39.856 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:39.856 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:39.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.856 --rc genhtml_branch_coverage=1 00:31:39.856 --rc genhtml_function_coverage=1 00:31:39.856 --rc genhtml_legend=1 00:31:39.856 --rc geninfo_all_blocks=1 00:31:39.856 --rc geninfo_unexecuted_blocks=1 00:31:39.856 00:31:39.856 ' 00:31:39.856 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:39.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.856 --rc genhtml_branch_coverage=1 00:31:39.856 --rc genhtml_function_coverage=1 00:31:39.856 --rc genhtml_legend=1 00:31:39.856 --rc geninfo_all_blocks=1 00:31:39.856 --rc geninfo_unexecuted_blocks=1 00:31:39.856 00:31:39.856 ' 00:31:39.856 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:39.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.857 --rc genhtml_branch_coverage=1 00:31:39.857 --rc genhtml_function_coverage=1 00:31:39.857 --rc genhtml_legend=1 00:31:39.857 --rc geninfo_all_blocks=1 00:31:39.857 --rc geninfo_unexecuted_blocks=1 00:31:39.857 00:31:39.857 ' 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:39.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.857 --rc genhtml_branch_coverage=1 00:31:39.857 --rc genhtml_function_coverage=1 00:31:39.857 --rc genhtml_legend=1 00:31:39.857 --rc geninfo_all_blocks=1 00:31:39.857 --rc geninfo_unexecuted_blocks=1 00:31:39.857 00:31:39.857 ' 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
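The scripts/common.sh trace above is the `lt 1.15 2` check on the lcov version: both version strings are split on `.`, `-` and `:` and compared numerically field by field. A self-contained sketch of the same idea, assuming purely numeric components (the fuller helper in scripts/common.sh also validates each field through its `decimal` function):

    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1    # equal versions are not less-than
    }
    lt 1.15 2 && echo "lcov predates 2.x: enable branch/function coverage flags"

This is why the run above exports the `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` options: the installed lcov 1.15 compared less-than 2.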
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:39.857 10:58:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:31:39.857 10:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:46.431 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:46.431 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:31:46.431 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:46.431 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:46.431 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:46.431 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:46.431 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:46.431 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:31:46.431 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:46.431 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:31:46.431 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:31:46.431 10:58:52 
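build_nvmf_app_args, traced a little above, assembles the target's command line incrementally: shared-memory id, a full tracepoint mask, and, for this interrupt-mode suite, the `--interrupt-mode` flag; later the netns exec prefix is spliced on so the target starts inside the test namespace. Condensed sketch; the initial NVMF_APP contents, `$rootdir`, and the `TEST_INTERRUPT_MODE` guard name are assumptions (the trace only shows `'[' 1 -eq 1 ']'`):

    NVMF_APP=("$rootdir/build/bin/nvmf_tgt")            # $rootdir: the SPDK checkout (assumed)
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)         # shm id + all tracepoint groups
    [[ $TEST_INTERRUPT_MODE -eq 1 ]] && NVMF_APP+=(--interrupt-mode)
    NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")   # run inside the namespace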
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:31:46.431 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:31:46.431 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:31:46.431 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:31:46.431 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:46.431 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:46.431 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:46.431 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:46.431 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:46.431 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:46.431 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:46.431 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:46.431 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:46.431 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:46.431 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:46.432 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:46.432 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:46.432 Found net devices under 0000:86:00.0: cvl_0_0 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:46.432 Found net devices under 0000:86:00.1: cvl_0_1 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:46.432 10:58:52 
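The device discovery traced above buckets PCI functions by vendor:device id; 0x8086:0x159b is an Intel E810 port (bound to the `ice` driver), and each matching function contributes the netdev found under its sysfs node. Rough sketch with the cache population elided (device ids and the sysfs layout are as shown in the log):

    intel=0x8086
    declare -A pci_bus_cache        # "vendor:device" -> space-separated PCI addresses
    # ... populate pci_bus_cache from sysfs/lspci ...
    e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})
    pci_devs=("${e810[@]}") net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:86:00.0/net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip to bare interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done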
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:46.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:46.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.405 ms 00:31:46.432 00:31:46.432 --- 10.0.0.2 ping statistics --- 00:31:46.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:46.432 rtt min/avg/max/mdev = 0.405/0.405/0.405/0.000 ms 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:46.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:46.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:31:46.432 00:31:46.432 --- 10.0.0.1 ping statistics --- 00:31:46.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:46.432 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1906727 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1906727 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1906727 ']' 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:46.432 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:46.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:46.433 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:46.433 10:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:46.433 [2024-11-19 10:58:53.029629] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:46.433 [2024-11-19 10:58:53.030562] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:31:46.433 [2024-11-19 10:58:53.030595] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:46.433 [2024-11-19 10:58:53.107867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:46.433 [2024-11-19 10:58:53.148661] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:46.433 [2024-11-19 10:58:53.148698] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:46.433 [2024-11-19 10:58:53.148705] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:46.433 [2024-11-19 10:58:53.148711] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:46.433 [2024-11-19 10:58:53.148717] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:46.433 [2024-11-19 10:58:53.149281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:46.433 [2024-11-19 10:58:53.216638] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:46.433 [2024-11-19 10:58:53.216859] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
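nvmfappstart, traced above, launches nvmf_tgt inside the namespace and then blocks in waitforlisten until the RPC socket answers. A sketch of that start-and-poll pattern; the probe command and poll interval are assumptions, the trace only shows max_retries=100 and the /var/tmp/spdk.sock address:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for (( i = 0; i < 100; i++ )); do                    # max_retries=100, as traced
        "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.5
    done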
00:31:46.433 10:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:46.433 10:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:31:46.433 10:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:46.433 10:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:46.433 10:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:46.433 10:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:46.433 10:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:31:46.433 10:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:31:46.433 10:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.433 10:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:46.433 [2024-11-19 10:58:53.285972] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:46.433 10:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.433 10:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:46.433 10:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.433 10:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:46.433 10:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.433 10:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:46.433 10:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.433 10:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:46.433 [2024-11-19 10:58:53.314217] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:46.433 10:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.433 10:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:46.433 10:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.433 10:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:46.433 10:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.433 10:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:31:46.433 10:58:53 
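Written out as plain rpc.py invocations, the configuration sequence the rpc_cmd calls above (and the malloc/namespace steps just below) perform is roughly the following; `-o` disables the C2H success optimization and `-c 0` sets the in-capsule data size to zero:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy       # TCP transport with zero-copy
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                      # any host, up to 10 namespaces
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0              # 32 MiB RAM-backed bdev, 4 KiB blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1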
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.433 10:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:46.433 malloc0 00:31:46.433 10:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.433 10:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:31:46.433 10:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.433 10:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:46.433 10:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.433 10:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:31:46.433 10:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:31:46.433 10:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:31:46.433 10:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:31:46.433 10:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:46.433 10:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:46.433 { 00:31:46.433 "params": { 00:31:46.433 "name": "Nvme$subsystem", 00:31:46.433 "trtype": "$TEST_TRANSPORT", 00:31:46.433 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:46.433 "adrfam": "ipv4", 00:31:46.433 "trsvcid": "$NVMF_PORT", 00:31:46.433 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:46.433 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:46.433 "hdgst": ${hdgst:-false}, 00:31:46.433 "ddgst": ${ddgst:-false} 00:31:46.433 }, 00:31:46.433 "method": "bdev_nvme_attach_controller" 00:31:46.433 } 00:31:46.433 EOF 00:31:46.433 )") 00:31:46.433 10:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:31:46.433 10:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:31:46.433 10:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:31:46.433 10:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:46.433 "params": { 00:31:46.433 "name": "Nvme1", 00:31:46.433 "trtype": "tcp", 00:31:46.433 "traddr": "10.0.0.2", 00:31:46.433 "adrfam": "ipv4", 00:31:46.433 "trsvcid": "4420", 00:31:46.433 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:46.433 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:46.433 "hdgst": false, 00:31:46.433 "ddgst": false 00:31:46.433 }, 00:31:46.433 "method": "bdev_nvme_attach_controller" 00:31:46.433 }' 00:31:46.433 [2024-11-19 10:58:53.417616] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
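The `/dev/fd/62` argument in the bdevperf invocation above is bash process substitution: gen_nvmf_target_json expands the heredoc template once per subsystem, jq-checks it, and hands the result to bdevperf as an anonymous file descriptor, so no config file ever touches disk. Equivalent call, spelled out:

    bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    # 10 s verify workload, queue depth 128, 8 KiB IOs, config read from an anonymous fd:
    "$bdevperf" --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192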
00:31:46.433 [2024-11-19 10:58:53.417669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1906888 ]
00:31:46.433 [2024-11-19 10:58:53.491844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:46.433 [2024-11-19 10:58:53.533724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:46.433 Running I/O for 10 seconds...
00:31:48.308 8326.00 IOPS, 65.05 MiB/s [2024-11-19T09:58:57.136Z] 8371.00 IOPS, 65.40 MiB/s [2024-11-19T09:58:57.704Z] 8397.00 IOPS, 65.60 MiB/s [2024-11-19T09:58:59.082Z] 8412.00 IOPS, 65.72 MiB/s [2024-11-19T09:59:00.018Z] 8409.80 IOPS, 65.70 MiB/s [2024-11-19T09:59:00.956Z] 8417.67 IOPS, 65.76 MiB/s [2024-11-19T09:59:01.889Z] 8399.71 IOPS, 65.62 MiB/s [2024-11-19T09:59:02.824Z] 8402.50 IOPS, 65.64 MiB/s [2024-11-19T09:59:03.758Z] 8404.11 IOPS, 65.66 MiB/s [2024-11-19T09:59:03.758Z] 8405.50 IOPS, 65.67 MiB/s
00:31:56.309 Latency(us)
00:31:56.309 [2024-11-19T09:59:03.758Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:56.309 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:31:56.309 Verification LBA range: start 0x0 length 0x1000
00:31:56.309 Nvme1n1 : 10.01 8409.57 65.70 0.00 0.00 15177.67 2108.55 21883.33
00:31:56.309 [2024-11-19T09:59:03.758Z] ===================================================================================================================
00:31:56.309 [2024-11-19T09:59:03.758Z] Total : 8409.57 65.70 0.00 0.00 15177.67 2108.55 21883.33
00:31:56.568 10:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1908564
00:31:56.568 10:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:31:56.568 10:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:31:56.568 10:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:31:56.568 10:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:31:56.568 10:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:31:56.568 10:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:31:56.568 10:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:31:56.568 10:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:31:56.568 {
00:31:56.568 "params": {
00:31:56.568 "name": "Nvme$subsystem",
00:31:56.568 "trtype": "$TEST_TRANSPORT",
00:31:56.568 "traddr": "$NVMF_FIRST_TARGET_IP",
00:31:56.568 "adrfam": "ipv4",
00:31:56.568 "trsvcid": "$NVMF_PORT",
00:31:56.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:31:56.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:31:56.568 "hdgst": ${hdgst:-false},
00:31:56.568 "ddgst": ${ddgst:-false}
00:31:56.568 },
00:31:56.568 "method": "bdev_nvme_attach_controller"
00:31:56.568 }
00:31:56.568 EOF
00:31:56.568 )")
00:31:56.568 10:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:31:56.568
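A quick sanity check on the table above: the MiB/s column is just IOPS times the 8192-byte IO size, and the average latency is consistent with Little's law at queue depth 128:

    awk 'BEGIN { printf "%.2f MiB/s\n", 8409.57 * 8192 / (1024 * 1024) }'   # -> 65.70 MiB/s
    awk 'BEGIN { printf "%.2f ms\n", 128 / 8409.57 * 1000 }'                # ~15.22 ms vs the 15.18 ms average reported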
[2024-11-19 10:59:03.885629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:56.568 [2024-11-19 10:59:03.885664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:56.568 10:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:31:56.568 10:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:31:56.568 10:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:56.568 "params": { 00:31:56.568 "name": "Nvme1", 00:31:56.568 "trtype": "tcp", 00:31:56.568 "traddr": "10.0.0.2", 00:31:56.568 "adrfam": "ipv4", 00:31:56.568 "trsvcid": "4420", 00:31:56.568 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:56.568 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:56.568 "hdgst": false, 00:31:56.568 "ddgst": false 00:31:56.568 }, 00:31:56.568 "method": "bdev_nvme_attach_controller" 00:31:56.568 }' 00:31:56.568 [2024-11-19 10:59:03.897587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:56.568 [2024-11-19 10:59:03.897602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:56.568 [2024-11-19 10:59:03.909581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:56.568 [2024-11-19 10:59:03.909591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:56.568 [2024-11-19 10:59:03.921582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:56.568 [2024-11-19 10:59:03.921592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:56.568 [2024-11-19 10:59:03.922133] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
00:31:56.568 [2024-11-19 10:59:03.922177] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1908564 ] 00:31:56.568 [2024-11-19 10:59:03.933582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:56.568 [2024-11-19 10:59:03.933593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:56.568 [2024-11-19 10:59:03.945580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:56.568 [2024-11-19 10:59:03.945591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:56.568 [2024-11-19 10:59:03.957583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:56.568 [2024-11-19 10:59:03.957594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:56.568 [2024-11-19 10:59:03.969582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:56.568 [2024-11-19 10:59:03.969592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:56.568 [2024-11-19 10:59:03.981587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:56.568 [2024-11-19 10:59:03.981600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:56.568 [2024-11-19 10:59:03.993584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:56.568 [2024-11-19 10:59:03.993600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:56.568 [2024-11-19 10:59:03.995430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:56.568 [2024-11-19 10:59:04.005583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:56.568 [2024-11-19 10:59:04.005596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:56.827 [2024-11-19 10:59:04.017582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:56.828 [2024-11-19 10:59:04.017595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:56.828 [2024-11-19 10:59:04.029580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:56.828 [2024-11-19 10:59:04.029590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:56.828 [2024-11-19 10:59:04.037734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:56.828 [2024-11-19 10:59:04.041578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:56.828 [2024-11-19 10:59:04.041590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:56.828 [2024-11-19 10:59:04.053594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:56.828 [2024-11-19 10:59:04.053611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:56.828 [2024-11-19 10:59:04.065588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:56.828 [2024-11-19 10:59:04.065603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:56.828 [2024-11-19 10:59:04.077583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
[2024-11-19 10:59:04.089600 - 10:59:04.245597] error pair repeated at ~12 ms intervals (duplicates elided)
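Editor's note: the repeated pair is the target side refusing to re-allocate a namespace ID that already exists, which the zcopy test appears to exercise deliberately in a tight loop while I/O is in flight. The same failure can be reproduced by hand with two identical adds; the subsystem NQN below matches the log, while the Malloc0 bdev name and -n flag are illustrative assumptions.

    # Sketch: spdk_nvmf_subsystem_add_ns_ext rejects an NSID that is already
    # allocated in the subsystem, producing exactly the error pair logged above.
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1  # succeeds
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1  # *ERROR*: Requested NSID 1 already in use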
[2024-11-19 10:59:04.257579 - 10:59:04.329591] error pair repeated at ~12 ms intervals (duplicates elided)
[2024-11-19 10:59:04.376184 - 10:59:04.385598] error pair repeated twice (duplicates elided)
00:31:57.087 Running I/O for 5 seconds...
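Editor's note: "Running I/O for 5 seconds..." is bdevperf's banner, and the EAL line above shows it was pinned to a single core (-c 0x1). A hand-run equivalent would look roughly like the sketch below; the binary path, queue depth, and workload are illustrative assumptions, and the 8192-byte I/O size is inferred from the throughput figures later in the log.

    # Sketch: 5-second bdevperf run on core 0 against a JSON config containing
    # the bdev_nvme_attach_controller parameters shown earlier.
    # -m = core mask, -t = run time (s), -q = queue depth, -o = I/O size, -w = workload.
    ./build/examples/bdevperf --json /tmp/nvmf_target.json \
        -m 0x1 -t 5 -q 64 -o 8192 -w verify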
00:31:57.087 [2024-11-19 10:59:04.401872 - 10:59:05.218383] "Requested NSID 1 already in use" / "Unable to add namespace" error pair repeated at ~12-15 ms intervals while I/O runs (duplicates elided)
[2024-11-19 10:59:05.218402 - 10:59:05.389917] error pair repeated at ~14 ms intervals (duplicates elided)
00:31:58.123 16344.00 IOPS, 127.69 MiB/s [2024-11-19T09:59:05.572Z]
[2024-11-19 10:59:05.406503 - 10:59:05.421734] error pair repeated twice (duplicates elided)
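Editor's note: the periodic throughput line is internally consistent with an 8 KiB I/O size, since 16344 IOPS x 8192 B = 133,890,048 B/s, or about 127.69 MiB/s. A one-liner check (the 8192-byte size is inferred, not printed by the log):

    # Verify: IOPS * io_size / 2^20 reproduces the MiB/s figure in the log.
    awk 'BEGIN { printf "%.2f MiB/s\n", 16344 * 8192 / 1048576 }'   # -> 127.69 MiB/s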
[2024-11-19 10:59:05.421753 - 10:59:06.234388] error pair repeated at ~13-15 ms intervals (duplicates elided)
[2024-11-19 10:59:06.249827 - 10:59:06.385690] error pair repeated at ~13 ms intervals (duplicates elided)
00:31:59.160 16420.00 IOPS, 128.28 MiB/s [2024-11-19T09:59:06.609Z]
[2024-11-19 10:59:06.399735 - 10:59:06.430022] error pair repeated (duplicates elided)
[2024-11-19 10:59:06.445288 - 10:59:07.238603] error pair repeated at ~13-15 ms intervals (duplicates elided)
[2024-11-19 10:59:07.253872 - 10:59:07.397421] error pair repeated at ~14 ms intervals (duplicates elided)
00:32:00.196 16434.33 IOPS, 128.39 MiB/s [2024-11-19T09:59:07.645Z]
[2024-11-19 10:59:07.412154 - 10:59:07.442459] error pair repeated (duplicates elided)
00:32:00.196 [2024-11-19 10:59:07.457263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.196 [2024-11-19 10:59:07.471544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.196 [2024-11-19 10:59:07.471562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.196 [2024-11-19 10:59:07.486665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.196 [2024-11-19 10:59:07.486685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.196 [2024-11-19 10:59:07.501506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.196 [2024-11-19 10:59:07.501527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.196 [2024-11-19 10:59:07.515527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.196 [2024-11-19 10:59:07.515547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.196 [2024-11-19 10:59:07.530917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.196 [2024-11-19 10:59:07.530937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.197 [2024-11-19 10:59:07.545790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.197 [2024-11-19 10:59:07.545809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.197 [2024-11-19 10:59:07.557084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.197 [2024-11-19 10:59:07.557104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.197 [2024-11-19 10:59:07.571729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.197 [2024-11-19 10:59:07.571749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.197 [2024-11-19 10:59:07.586817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.197 [2024-11-19 10:59:07.586836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.197 [2024-11-19 10:59:07.601714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.197 [2024-11-19 10:59:07.601734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.197 [2024-11-19 10:59:07.613082] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.197 [2024-11-19 10:59:07.613102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.197 [2024-11-19 10:59:07.627788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.197 [2024-11-19 10:59:07.627808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.197 [2024-11-19 10:59:07.642523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.197 [2024-11-19 10:59:07.642543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.455 [2024-11-19 10:59:07.657618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.455 [2024-11-19 10:59:07.657638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.455 [2024-11-19 10:59:07.671489] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.455 [2024-11-19 10:59:07.671508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.455 [2024-11-19 10:59:07.686703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.455 [2024-11-19 10:59:07.686722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.455 [2024-11-19 10:59:07.701863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.455 [2024-11-19 10:59:07.701883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.455 [2024-11-19 10:59:07.713513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.455 [2024-11-19 10:59:07.713534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.455 [2024-11-19 10:59:07.728099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.455 [2024-11-19 10:59:07.728120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.455 [2024-11-19 10:59:07.743196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.455 [2024-11-19 10:59:07.743225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.455 [2024-11-19 10:59:07.758222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.455 [2024-11-19 10:59:07.758242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.455 [2024-11-19 10:59:07.773552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.455 [2024-11-19 10:59:07.773572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.455 [2024-11-19 10:59:07.787338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.455 [2024-11-19 10:59:07.787358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.455 [2024-11-19 10:59:07.802236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.455 [2024-11-19 10:59:07.802256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.455 [2024-11-19 10:59:07.817292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.456 [2024-11-19 10:59:07.817312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.456 [2024-11-19 10:59:07.828844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.456 [2024-11-19 10:59:07.828864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.456 [2024-11-19 10:59:07.843268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.456 [2024-11-19 10:59:07.843290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.456 [2024-11-19 10:59:07.858637] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.456 [2024-11-19 10:59:07.858657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.456 [2024-11-19 10:59:07.873366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.456 [2024-11-19 10:59:07.873386] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.456 [2024-11-19 10:59:07.884184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.456 [2024-11-19 10:59:07.884204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.456 [2024-11-19 10:59:07.899699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.456 [2024-11-19 10:59:07.899720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.715 [2024-11-19 10:59:07.914665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.715 [2024-11-19 10:59:07.914684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.715 [2024-11-19 10:59:07.929486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.715 [2024-11-19 10:59:07.929506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.715 [2024-11-19 10:59:07.943567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.715 [2024-11-19 10:59:07.943586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.715 [2024-11-19 10:59:07.958645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.715 [2024-11-19 10:59:07.958664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.715 [2024-11-19 10:59:07.969987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.715 [2024-11-19 10:59:07.970005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.715 [2024-11-19 10:59:07.983567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.715 [2024-11-19 10:59:07.983587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.715 [2024-11-19 10:59:07.998874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.715 [2024-11-19 10:59:07.998892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.715 [2024-11-19 10:59:08.013986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.715 [2024-11-19 10:59:08.014005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.715 [2024-11-19 10:59:08.029683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.715 [2024-11-19 10:59:08.029702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.715 [2024-11-19 10:59:08.042311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.715 [2024-11-19 10:59:08.042332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.715 [2024-11-19 10:59:08.055615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.715 [2024-11-19 10:59:08.055636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.715 [2024-11-19 10:59:08.071131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.715 [2024-11-19 10:59:08.071151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.715 [2024-11-19 10:59:08.086279] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.715 [2024-11-19 10:59:08.086299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.715 [2024-11-19 10:59:08.101785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.715 [2024-11-19 10:59:08.101805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.715 [2024-11-19 10:59:08.115604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.715 [2024-11-19 10:59:08.115622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.715 [2024-11-19 10:59:08.130933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.715 [2024-11-19 10:59:08.130958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.715 [2024-11-19 10:59:08.145882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.715 [2024-11-19 10:59:08.145901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.715 [2024-11-19 10:59:08.161654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.715 [2024-11-19 10:59:08.161673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.974 [2024-11-19 10:59:08.175639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.974 [2024-11-19 10:59:08.175657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.974 [2024-11-19 10:59:08.190624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.974 [2024-11-19 10:59:08.190643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.974 [2024-11-19 10:59:08.205126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.974 [2024-11-19 10:59:08.205145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.974 [2024-11-19 10:59:08.218808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.974 [2024-11-19 10:59:08.218827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.974 [2024-11-19 10:59:08.233645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.974 [2024-11-19 10:59:08.233664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.974 [2024-11-19 10:59:08.245050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.974 [2024-11-19 10:59:08.245069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.974 [2024-11-19 10:59:08.259378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.974 [2024-11-19 10:59:08.259397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.974 [2024-11-19 10:59:08.274490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.975 [2024-11-19 10:59:08.274509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.975 [2024-11-19 10:59:08.289522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.975 [2024-11-19 10:59:08.289543] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.975 [2024-11-19 10:59:08.303386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.975 [2024-11-19 10:59:08.303405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.975 [2024-11-19 10:59:08.318526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.975 [2024-11-19 10:59:08.318544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.975 [2024-11-19 10:59:08.333205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.975 [2024-11-19 10:59:08.333225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.975 [2024-11-19 10:59:08.347593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.975 [2024-11-19 10:59:08.347612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.975 [2024-11-19 10:59:08.362482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.975 [2024-11-19 10:59:08.362500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.975 [2024-11-19 10:59:08.377549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.975 [2024-11-19 10:59:08.377568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.975 [2024-11-19 10:59:08.391748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.975 [2024-11-19 10:59:08.391768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.975 16421.00 IOPS, 128.29 MiB/s [2024-11-19T09:59:08.424Z] [2024-11-19 10:59:08.406791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.975 [2024-11-19 10:59:08.406811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.975 [2024-11-19 10:59:08.421696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.975 [2024-11-19 10:59:08.421716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.233 [2024-11-19 10:59:08.433178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.233 [2024-11-19 10:59:08.433197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.233 [2024-11-19 10:59:08.447507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.233 [2024-11-19 10:59:08.447526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.233 [2024-11-19 10:59:08.462407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.233 [2024-11-19 10:59:08.462426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.233 [2024-11-19 10:59:08.477684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.233 [2024-11-19 10:59:08.477712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.233 [2024-11-19 10:59:08.490746] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.233 [2024-11-19 10:59:08.490766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.233 [2024-11-19 
10:59:08.506335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.233 [2024-11-19 10:59:08.506354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.233 [2024-11-19 10:59:08.521590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.233 [2024-11-19 10:59:08.521609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.233 [2024-11-19 10:59:08.534201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.233 [2024-11-19 10:59:08.534220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.233 [2024-11-19 10:59:08.546966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.233 [2024-11-19 10:59:08.546984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.233 [2024-11-19 10:59:08.562125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.234 [2024-11-19 10:59:08.562144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.234 [2024-11-19 10:59:08.578091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.234 [2024-11-19 10:59:08.578111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.234 [2024-11-19 10:59:08.593964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.234 [2024-11-19 10:59:08.593999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.234 [2024-11-19 10:59:08.609269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.234 [2024-11-19 10:59:08.609288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.234 [2024-11-19 10:59:08.620843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.234 [2024-11-19 10:59:08.620861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.234 [2024-11-19 10:59:08.634881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.234 [2024-11-19 10:59:08.634899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.234 [2024-11-19 10:59:08.649793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.234 [2024-11-19 10:59:08.649811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.234 [2024-11-19 10:59:08.662350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.234 [2024-11-19 10:59:08.662369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.234 [2024-11-19 10:59:08.675261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.234 [2024-11-19 10:59:08.675281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.492 [2024-11-19 10:59:08.690563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.492 [2024-11-19 10:59:08.690582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.492 [2024-11-19 10:59:08.705492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.492 [2024-11-19 10:59:08.705512] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.492 [2024-11-19 10:59:08.718533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.492 [2024-11-19 10:59:08.718553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.492 [2024-11-19 10:59:08.733275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.492 [2024-11-19 10:59:08.733294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.492 [2024-11-19 10:59:08.747170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.493 [2024-11-19 10:59:08.747195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.493 [2024-11-19 10:59:08.762267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.493 [2024-11-19 10:59:08.762286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.493 [2024-11-19 10:59:08.777552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.493 [2024-11-19 10:59:08.777571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.493 [2024-11-19 10:59:08.788933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.493 [2024-11-19 10:59:08.788958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.493 [2024-11-19 10:59:08.803536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.493 [2024-11-19 10:59:08.803555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.493 [2024-11-19 10:59:08.818918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.493 [2024-11-19 10:59:08.818938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.493 [2024-11-19 10:59:08.833933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.493 [2024-11-19 10:59:08.833957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.493 [2024-11-19 10:59:08.845219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.493 [2024-11-19 10:59:08.845238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.493 [2024-11-19 10:59:08.859628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.493 [2024-11-19 10:59:08.859647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.493 [2024-11-19 10:59:08.874579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.493 [2024-11-19 10:59:08.874598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.493 [2024-11-19 10:59:08.889382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.493 [2024-11-19 10:59:08.889401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.493 [2024-11-19 10:59:08.900638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.493 [2024-11-19 10:59:08.900657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.493 [2024-11-19 10:59:08.915462] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.493 [2024-11-19 10:59:08.915481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.493 [2024-11-19 10:59:08.930417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.493 [2024-11-19 10:59:08.930436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.752 [2024-11-19 10:59:08.945598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.752 [2024-11-19 10:59:08.945618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.752 [2024-11-19 10:59:08.957403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.752 [2024-11-19 10:59:08.957422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.752 [2024-11-19 10:59:08.971543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.752 [2024-11-19 10:59:08.971563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.752 [2024-11-19 10:59:08.986758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.752 [2024-11-19 10:59:08.986779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.752 [2024-11-19 10:59:09.001620] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.752 [2024-11-19 10:59:09.001641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.752 [2024-11-19 10:59:09.014431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.752 [2024-11-19 10:59:09.014456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.752 [2024-11-19 10:59:09.027142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.752 [2024-11-19 10:59:09.027161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.752 [2024-11-19 10:59:09.042791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.752 [2024-11-19 10:59:09.042812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.752 [2024-11-19 10:59:09.058081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.752 [2024-11-19 10:59:09.058099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.752 [2024-11-19 10:59:09.073685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.752 [2024-11-19 10:59:09.073711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.752 [2024-11-19 10:59:09.087356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.752 [2024-11-19 10:59:09.087376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.752 [2024-11-19 10:59:09.102600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.752 [2024-11-19 10:59:09.102620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.752 [2024-11-19 10:59:09.118573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.752 [2024-11-19 10:59:09.118592] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.752 [2024-11-19 10:59:09.133490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.752 [2024-11-19 10:59:09.133510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.752 [2024-11-19 10:59:09.147271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.752 [2024-11-19 10:59:09.147290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.752 [2024-11-19 10:59:09.162270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.752 [2024-11-19 10:59:09.162289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.752 [2024-11-19 10:59:09.177560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.752 [2024-11-19 10:59:09.177579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:01.752 [2024-11-19 10:59:09.190169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:01.752 [2024-11-19 10:59:09.190188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:02.010 [2024-11-19 10:59:09.202893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:02.010 [2024-11-19 10:59:09.202913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:02.010 [2024-11-19 10:59:09.214213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:02.010 [2024-11-19 10:59:09.214232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:02.010 [2024-11-19 10:59:09.227755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:02.010 [2024-11-19 10:59:09.227775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:02.010 [2024-11-19 10:59:09.242806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:02.010 [2024-11-19 10:59:09.242826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:02.010 [2024-11-19 10:59:09.257745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:02.010 [2024-11-19 10:59:09.257764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:02.010 [2024-11-19 10:59:09.271835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:02.010 [2024-11-19 10:59:09.271854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:02.010 [2024-11-19 10:59:09.287019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:02.010 [2024-11-19 10:59:09.287043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:02.010 [2024-11-19 10:59:09.301841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:02.010 [2024-11-19 10:59:09.301860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:02.010 [2024-11-19 10:59:09.313174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:02.010 [2024-11-19 10:59:09.313193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:02.010 [2024-11-19 10:59:09.327363] 
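Each pair of messages above is one failed RPC: spdk_nvmf_subsystem_add_ns_ext() rejects the add because the explicitly requested NSID is already attached, and the RPC handler then logs "Unable to add namespace". A minimal sketch of provoking the same pair by hand with SPDK's rpc.py follows; the subsystem NQN matches this run, while the socket path, bdev name, and size are illustrative assumptions, not taken from the log:

    #!/usr/bin/env bash
    # Sketch only: assumes a running SPDK nvmf target on the default RPC socket
    # (/var/tmp/spdk.sock), the subsystem nqn.2016-06.io.spdk:cnode1 from this
    # run, and that NSID 1 is already attached; "malloc1" is an illustrative name.
    RPC=./scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    # Create a spare bdev to attach (64 MB, 512-byte blocks).
    $RPC bdev_malloc_create -b malloc1 64 512

    # Explicitly requesting the occupied NSID fails with the error pair above.
    $RPC nvmf_subsystem_add_ns "$NQN" malloc1 -n 1 || echo "NSID 1 already in use"

    # Omitting -n instead lets the target pick the next free NSID.
    $RPC nvmf_subsystem_add_ns "$NQN" malloc1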
00:32:02.010 16424.80 IOPS, 128.32 MiB/s [2024-11-19T09:59:09.459Z]
[2024-11-19 10:59:09.409595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:02.010 [2024-11-19 10:59:09.409612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:02.010
00:32:02.010 Latency(us)
00:32:02.010 [2024-11-19T09:59:09.460Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:02.011 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:32:02.011 Nvme1n1 : 5.01 16431.16 128.37 0.00 0.00 7783.80 2008.82 13791.05
00:32:02.011 [2024-11-19T09:59:09.460Z] ===================================================================================================================
00:32:02.011 [2024-11-19T09:59:09.460Z] Total : 16431.16 128.37 0.00 0.00 7783.80 2008.82 13791.05
00:32:02.011 [2024-11-19 10:59:09.421585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:02.011 [2024-11-19 10:59:09.421602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the pair repeats at ~12 ms intervals through 10:59:09.565591 (elapsed 00:32:02.269) while the I/O job winds down ...]
00:32:02.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1908564) - No such process
00:32:02.269 10:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1908564
00:32:02.269 10:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:02.269 10:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:02.269 10:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:32:02.269 10:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:02.269 10:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:32:02.269 10:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:02.269 10:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:32:02.269 delay0
00:32:02.269 10:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:02.269 10:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
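Before the abort pass that follows, the harness swaps the namespace: it detaches NSID 1, wraps the base bdev in a delay bdev, and re-attaches the wrapper under the same NSID. As standalone rpc.py calls the traced sequence looks roughly like the sketch below (rpc_cmd in the harness is effectively a wrapper around rpc.py; the 1,000,000 us latencies mirror the trace above, presumably so the abort tool always has slow in-flight I/O to cancel):

    #!/usr/bin/env bash
    # Sketch of the namespace swap traced above, assuming a running target with
    # subsystem nqn.2016-06.io.spdk:cnode1 and a base bdev named malloc0.
    RPC=./scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    # Detach the namespace the add-loop above was colliding with.
    $RPC nvmf_subsystem_remove_ns "$NQN" 1

    # Wrap malloc0 in a delay bdev; average (-r/-w) and p99 (-t/-n) read/write
    # latencies are all forced to 1,000,000 us, i.e. about one second per I/O.
    $RPC bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000

    # Re-attach the delayed bdev under the original NSID.
    $RPC nvmf_subsystem_add_ns "$NQN" delay0 -n 1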
00:32:02.269 10:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:02.269 10:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:32:02.269 10:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:02.269 10:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:32:02.527 [2024-11-19 10:59:09.751086] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:32:09.091 Initializing NVMe Controllers
00:32:09.091 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:32:09.091 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:32:09.091 Initialization complete. Launching workers.
00:32:09.091 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 265, failed: 14509
00:32:09.091 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 14667, failed to submit 107
00:32:09.091 success 14614, unsuccessful 53, failed 0
00:32:09.091 10:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:32:09.091 10:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:32:09.091 10:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:32:09.091 10:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:32:09.091 10:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:32:09.091 10:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:32:09.091 10:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:09.091 10:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:32:09.091 rmmod nvme_tcp
00:32:09.091 rmmod nvme_fabrics
00:32:09.091 rmmod nvme_keyring
00:32:09.091 10:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:09.091 10:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:32:09.091 10:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:32:09.091 10:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1906727 ']'
00:32:09.091 10:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1906727
00:32:09.091 10:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1906727 ']'
00:32:09.091 10:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1906727
00:32:09.091 10:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:32:09.091 10:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
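The abort summary above appears to read as: 14509 of the queued I/Os were failed (aborted) against 265 that completed; 14667 abort commands were submitted and 107 could not be submitted, and of the submitted aborts 14614 succeeded while 53 were unsuccessful. For running the example tool by hand, the invocation from the trace is reproduced below as a sketch (paths are relative to an SPDK build tree; the transport string is specific to this run's target):

    #!/usr/bin/env bash
    # Flags as used in the trace above:
    #   -c 0x1      core mask (single worker on core 0)
    #   -t 5        run time in seconds
    #   -q 64       queue depth per namespace
    #   -w randrw   workload type; -M 50 makes it a 50/50 read/write mix
    #   -l warning  log level
    #   -r '...'    transport ID of the target (environment-specific)
    ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'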
00:32:09.091 10:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1906727
00:32:09.091 10:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:32:09.091 10:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:32:09.091 10:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1906727'
00:32:09.091 killing process with pid 1906727
00:32:09.091 10:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1906727
00:32:09.092 10:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1906727
00:32:09.092 10:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:32:09.092 10:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:32:09.092 10:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:32:09.092 10:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:32:09.092 10:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:32:09.092 10:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:32:09.092 10:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:32:09.092 10:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:32:09.092 10:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:32:09.092 10:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:09.092 10:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:32:09.092 10:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:10.998 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:32:10.998
00:32:10.998 real 0m31.520s
00:32:10.998 user 0m40.830s
00:32:10.998 sys 0m12.349s
00:32:10.998 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:10.998 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:32:10.998 ************************************
00:32:10.998 END TEST nvmf_zcopy
00:32:10.998 ************************************
00:32:10.998 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:32:11.258 ************************************
00:32:11.258 START TEST nvmf_nmic
00:32:11.258 ************************************ 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:32:11.258 * Looking for test storage... 00:32:11.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:11.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.258 --rc genhtml_branch_coverage=1 00:32:11.258 --rc genhtml_function_coverage=1 00:32:11.258 --rc genhtml_legend=1 00:32:11.258 --rc geninfo_all_blocks=1 00:32:11.258 --rc geninfo_unexecuted_blocks=1 00:32:11.258 00:32:11.258 ' 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:11.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.258 --rc genhtml_branch_coverage=1 00:32:11.258 --rc genhtml_function_coverage=1 00:32:11.258 --rc genhtml_legend=1 00:32:11.258 --rc geninfo_all_blocks=1 00:32:11.258 --rc geninfo_unexecuted_blocks=1 00:32:11.258 00:32:11.258 ' 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:11.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.258 --rc genhtml_branch_coverage=1 00:32:11.258 --rc genhtml_function_coverage=1 00:32:11.258 --rc genhtml_legend=1 00:32:11.258 --rc geninfo_all_blocks=1 00:32:11.258 --rc geninfo_unexecuted_blocks=1 00:32:11.258 00:32:11.258 ' 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:11.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.258 --rc genhtml_branch_coverage=1 00:32:11.258 --rc genhtml_function_coverage=1 00:32:11.258 --rc genhtml_legend=1 00:32:11.258 --rc geninfo_all_blocks=1 00:32:11.258 --rc geninfo_unexecuted_blocks=1 00:32:11.258 00:32:11.258 ' 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:11.258 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:11.259 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.259 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.259 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.259 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:32:11.259 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.259 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:32:11.259 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:11.259 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:11.259 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:11.259 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:11.259 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:11.259 10:59:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:11.259 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:11.259 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:11.259 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:11.259 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:11.259 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:11.259 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:11.259 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:32:11.259 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:11.259 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:11.259 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:11.259 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:11.259 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:11.259 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:11.259 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:11.259 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:11.259 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:11.259 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:11.259 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:32:11.259 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:17.830 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:17.830 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:32:17.830 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:17.830 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:17.830 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:17.830 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:17.830 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:17.830 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:32:17.830 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:17.831 10:59:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:17.831 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:17.831 10:59:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:17.831 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:17.831 Found net devices under 0000:86:00.0: cvl_0_0 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:17.831 
10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:17.831 Found net devices under 0000:86:00.1: cvl_0_1 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
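
At this point nvmf/common.sh has built the point-to-point NVMe/TCP test topology: one port of the dual-port E810 NIC (cvl_0_0) is moved into a dedicated network namespace to serve as the target side, while the sibling port (cvl_0_1) stays in the root namespace as the initiator. Together with the link-up, firewall, and ping steps that follow in the trace, the setup condenses to roughly this sketch (interface, namespace, and address names as logged; the harness's ipts wrapper additionally tags its iptables rule with an SPDK_NVMF comment so it can be stripped again on teardown):

    ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move one NIC port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP on the first port
    ping -c 1 10.0.0.2                                                  # verify reachability in both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
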
00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:17.831 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:17.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.452 ms 00:32:17.831 00:32:17.831 --- 10.0.0.2 ping statistics --- 00:32:17.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:17.831 rtt min/avg/max/mdev = 0.452/0.452/0.452/0.000 ms 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:17.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:17.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:32:17.831 00:32:17.831 --- 10.0.0.1 ping statistics --- 00:32:17.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:17.831 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:32:17.831 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1913925 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1913925 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1913925 ']' 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:17.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:17.832 [2024-11-19 10:59:24.651614] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:17.832 [2024-11-19 10:59:24.652526] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:32:17.832 [2024-11-19 10:59:24.652559] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:17.832 [2024-11-19 10:59:24.730691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:17.832 [2024-11-19 10:59:24.774163] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:17.832 [2024-11-19 10:59:24.774203] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:17.832 [2024-11-19 10:59:24.774210] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:17.832 [2024-11-19 10:59:24.774216] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:17.832 [2024-11-19 10:59:24.774221] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:17.832 [2024-11-19 10:59:24.775797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:17.832 [2024-11-19 10:59:24.775904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:17.832 [2024-11-19 10:59:24.776014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:17.832 [2024-11-19 10:59:24.776013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:17.832 [2024-11-19 10:59:24.843121] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:17.832 [2024-11-19 10:59:24.844210] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:17.832 [2024-11-19 10:59:24.844243] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
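
The target is started inside that namespace with the arguments assembled above (-i 0 shared-memory id, -e 0xFFFF tracepoint group mask, --interrupt-mode, core mask 0xF), and waitforlisten then blocks until the application answers on /var/tmp/spdk.sock. A minimal stand-in for this start-and-wait pattern, with the polling loop as an illustrative substitute for the harness's waitforlisten helper (spdk_get_version is a stock SPDK RPC):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    nvmfpid=$!
    # the RPC socket is a Unix socket, so it is reachable without entering the namespace
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done
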
00:32:17.832 [2024-11-19 10:59:24.844677] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:17.832 [2024-11-19 10:59:24.844732] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:17.832 [2024-11-19 10:59:24.912844] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:17.832 Malloc0 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
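
The rpc_cmd invocations above are effectively scripts/rpc.py calls (fio.sh later assigns that same script to rpc_py), so provisioning the first subsystem condenses to roughly:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                    # TCP transport, options as logged
    $rpc bdev_malloc_create 64 512 -b Malloc0                       # 64 MB RAM-backed bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # expose the bdev as a namespace
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Test case 1 below then tries to add the same Malloc0 to a second subsystem (cnode2); this is expected to fail, since the bdev is already claimed exclusive_write by the first subsystem, and the JSON-RPC error captured in the log is exactly what the test asserts.
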
00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:17.832 [2024-11-19 10:59:24.993117] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:32:17.832 test case1: single bdev can't be used in multiple subsystems 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.832 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:17.832 10:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.832 10:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:17.832 10:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.832 10:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:17.832 10:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.832 10:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:32:17.832 10:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:32:17.832 10:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.832 10:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:17.832 [2024-11-19 10:59:25.024534] bdev.c:8180:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:32:17.832 [2024-11-19 10:59:25.024555] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:32:17.832 [2024-11-19 10:59:25.024562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.832 request: 00:32:17.832 { 00:32:17.832 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:32:17.832 "namespace": { 00:32:17.832 "bdev_name": "Malloc0", 00:32:17.832 "no_auto_visible": false 00:32:17.832 }, 00:32:17.832 "method": "nvmf_subsystem_add_ns", 00:32:17.832 "req_id": 1 00:32:17.832 } 00:32:17.832 Got JSON-RPC error response 00:32:17.832 response: 00:32:17.832 { 00:32:17.832 "code": -32602, 00:32:17.832 "message": "Invalid parameters" 00:32:17.832 } 00:32:17.832 10:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:17.832 10:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:32:17.832 10:59:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:32:17.832 10:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:32:17.832 Adding namespace failed - expected result. 00:32:17.832 10:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:32:17.832 test case2: host connect to nvmf target in multiple paths 00:32:17.832 10:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:17.832 10:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.832 10:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:17.833 [2024-11-19 10:59:25.036631] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:17.833 10:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.833 10:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:17.833 10:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:32:18.091 10:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:32:18.091 10:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:32:18.091 10:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:18.091 10:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:18.091 10:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:32:20.623 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:20.623 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:20.624 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:20.624 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:20.624 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:20.624 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:32:20.624 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:20.624 [global] 00:32:20.624 thread=1 00:32:20.624 invalidate=1 
00:32:20.624 rw=write 00:32:20.624 time_based=1 00:32:20.624 runtime=1 00:32:20.624 ioengine=libaio 00:32:20.624 direct=1 00:32:20.624 bs=4096 00:32:20.624 iodepth=1 00:32:20.624 norandommap=0 00:32:20.624 numjobs=1 00:32:20.624 00:32:20.624 verify_dump=1 00:32:20.624 verify_backlog=512 00:32:20.624 verify_state_save=0 00:32:20.624 do_verify=1 00:32:20.624 verify=crc32c-intel 00:32:20.624 [job0] 00:32:20.624 filename=/dev/nvme0n1 00:32:20.624 Could not set queue depth (nvme0n1) 00:32:20.624 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:20.624 fio-3.35 00:32:20.624 Starting 1 thread 00:32:21.560 00:32:21.560 job0: (groupid=0, jobs=1): err= 0: pid=1914535: Tue Nov 19 10:59:28 2024 00:32:21.560 read: IOPS=22, BW=89.9KiB/s (92.1kB/s)(92.0KiB/1023msec) 00:32:21.560 slat (nsec): min=9780, max=23893, avg=21712.57, stdev=2708.86 00:32:21.560 clat (usec): min=40866, max=41084, avg=40973.53, stdev=56.45 00:32:21.560 lat (usec): min=40888, max=41108, avg=40995.25, stdev=55.83 00:32:21.560 clat percentiles (usec): 00:32:21.560 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:32:21.560 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:21.560 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:21.560 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:21.560 | 99.99th=[41157] 00:32:21.560 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:32:21.560 slat (nsec): min=9834, max=44831, avg=11127.65, stdev=2485.21 00:32:21.560 clat (usec): min=130, max=355, avg=142.05, stdev=11.44 00:32:21.560 lat (usec): min=140, max=400, avg=153.18, stdev=12.86 00:32:21.560 clat percentiles (usec): 00:32:21.560 | 1.00th=[ 135], 5.00th=[ 137], 10.00th=[ 137], 20.00th=[ 139], 00:32:21.560 | 30.00th=[ 139], 40.00th=[ 141], 50.00th=[ 141], 60.00th=[ 143], 00:32:21.560 | 70.00th=[ 143], 80.00th=[ 145], 90.00th=[ 147], 95.00th=[ 151], 00:32:21.560 | 99.00th=[ 169], 99.50th=[ 182], 99.90th=[ 355], 99.95th=[ 355], 00:32:21.560 | 99.99th=[ 355] 00:32:21.560 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:32:21.560 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:21.560 lat (usec) : 250=95.51%, 500=0.19% 00:32:21.560 lat (msec) : 50=4.30% 00:32:21.560 cpu : usr=0.59%, sys=0.59%, ctx=535, majf=0, minf=1 00:32:21.560 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:21.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:21.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:21.560 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:21.560 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:21.560 00:32:21.560 Run status group 0 (all jobs): 00:32:21.560 READ: bw=89.9KiB/s (92.1kB/s), 89.9KiB/s-89.9KiB/s (92.1kB/s-92.1kB/s), io=92.0KiB (94.2kB), run=1023-1023msec 00:32:21.560 WRITE: bw=2002KiB/s (2050kB/s), 2002KiB/s-2002KiB/s (2050kB/s-2050kB/s), io=2048KiB (2097kB), run=1023-1023msec 00:32:21.560 00:32:21.560 Disk stats (read/write): 00:32:21.560 nvme0n1: ios=69/512, merge=0/0, ticks=800/70, in_queue=870, util=91.18% 00:32:21.560 10:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:21.819 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:32:21.819 10:59:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:21.819 10:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:32:21.819 10:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:21.819 10:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:21.819 10:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:21.819 10:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:21.819 10:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:32:21.819 10:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:32:21.819 10:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:32:21.819 10:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:21.819 10:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:32:21.819 10:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:21.819 10:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:32:21.819 10:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:21.819 10:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:21.819 rmmod nvme_tcp 00:32:21.819 rmmod nvme_fabrics 00:32:21.819 rmmod nvme_keyring 00:32:21.819 10:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:21.819 10:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:32:21.819 10:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:32:21.819 10:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1913925 ']' 00:32:21.819 10:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1913925 00:32:21.819 10:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1913925 ']' 00:32:21.819 10:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1913925 00:32:21.819 10:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:32:21.819 10:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:21.819 10:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1913925 00:32:22.077 10:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:22.077 10:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:22.077 10:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 1913925' 00:32:22.077 killing process with pid 1913925 00:32:22.077 10:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1913925 00:32:22.077 10:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1913925 00:32:22.077 10:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:22.077 10:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:22.077 10:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:22.077 10:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:32:22.077 10:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:32:22.077 10:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:22.077 10:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:32:22.078 10:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:22.078 10:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:22.078 10:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:22.078 10:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:22.078 10:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:24.612 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:24.612 00:32:24.612 real 0m13.098s 00:32:24.612 user 0m24.352s 00:32:24.612 sys 0m5.982s 00:32:24.612 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:24.612 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:24.612 ************************************ 00:32:24.612 END TEST nvmf_nmic 00:32:24.612 ************************************ 00:32:24.612 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:32:24.612 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:24.612 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:24.612 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:24.612 ************************************ 00:32:24.612 START TEST nvmf_fio_target 00:32:24.612 ************************************ 00:32:24.612 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:32:24.612 * Looking for test storage... 
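
The nvmf_nmic teardown above runs the setup in reverse; reconstructed from the trace, it condenses to roughly the following (the _remove_spdk_ns step has its xtrace redirected away, and is presumably where the cvl_0_0_ns_spdk namespace itself is deleted):

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1            # drops both paths (ports 4420 and 4421)
    modprobe -v -r nvme-tcp                                  # rmmod nvme_tcp, nvme_fabrics, nvme_keyring
    kill 1913925 && wait 1913925                             # stop the nvmf_tgt reactors (pid as logged)
    iptables-save | grep -v SPDK_NVMF | iptables-restore     # strip only the harness-tagged rules
    ip -4 addr flush cvl_0_1                                 # return the initiator port to a clean state
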
00:32:24.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:24.612 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:24.612 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:32:24.612 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:24.612 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:24.612 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:24.612 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:24.612 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:24.612 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:32:24.612 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:32:24.612 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:32:24.612 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:32:24.612 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:32:24.612 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:32:24.612 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:32:24.612 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:24.612 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:32:24.612 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:32:24.612 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:24.612 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:24.612 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:32:24.612 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:32:24.612 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:24.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:24.613 --rc genhtml_branch_coverage=1 00:32:24.613 --rc genhtml_function_coverage=1 00:32:24.613 --rc genhtml_legend=1 00:32:24.613 --rc geninfo_all_blocks=1 00:32:24.613 --rc geninfo_unexecuted_blocks=1 00:32:24.613 00:32:24.613 ' 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:24.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:24.613 --rc genhtml_branch_coverage=1 00:32:24.613 --rc genhtml_function_coverage=1 00:32:24.613 --rc genhtml_legend=1 00:32:24.613 --rc geninfo_all_blocks=1 00:32:24.613 --rc geninfo_unexecuted_blocks=1 00:32:24.613 00:32:24.613 ' 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:24.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:24.613 --rc genhtml_branch_coverage=1 00:32:24.613 --rc genhtml_function_coverage=1 00:32:24.613 --rc genhtml_legend=1 00:32:24.613 --rc geninfo_all_blocks=1 00:32:24.613 --rc geninfo_unexecuted_blocks=1 00:32:24.613 00:32:24.613 ' 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:24.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:24.613 --rc genhtml_branch_coverage=1 00:32:24.613 --rc genhtml_function_coverage=1 00:32:24.613 --rc genhtml_legend=1 00:32:24.613 --rc geninfo_all_blocks=1 00:32:24.613 --rc geninfo_unexecuted_blocks=1 00:32:24.613 
00:32:24.613 ' 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- #
NVMF_APP+=("${NO_HUGE[@]}") 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:24.613 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:24.614 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:24.614 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:32:24.614 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:24.614 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:24.614 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:24.614 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:24.614 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:24.614 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:24.614 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:24.614 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:24.614 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:24.614 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:24.614 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:32:24.614 10:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:31.182 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:31.182 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:32:31.182 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:31.182 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:31.182 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:31.182 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:31.182 10:59:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:31.182 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:32:31.182 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:31.182 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:32:31.182 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:32:31.182 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:32:31.182 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:32:31.182 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:32:31.182 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:31.183 10:59:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:31.183 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:31.183 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:31.183 Found net 
devices under 0000:86:00.0: cvl_0_0 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:31.183 Found net devices under 0000:86:00.1: cvl_0_1 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:31.183 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:31.183 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:31.183 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.370 ms 00:32:31.183 00:32:31.183 --- 10.0.0.2 ping statistics --- 00:32:31.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:31.184 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:32:31.184 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:31.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:31.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:32:31.184 00:32:31.184 --- 10.0.0.1 ping statistics --- 00:32:31.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:31.184 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:32:31.184 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:31.184 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:32:31.184 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:31.184 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:31.184 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:31.184 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:31.184 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:31.184 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:31.184 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:31.184 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:32:31.184 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:31.184 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:31.184 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:31.184 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1918283 00:32:31.184 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:31.184 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1918283 00:32:31.184 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1918283 ']' 00:32:31.184 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:31.184 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:31.184 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:31.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
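The namespace plumbing that nvmftestinit performed above reduces to a short, reproducible sketch. This is a minimal recap of the commands visible in the trace, not harness code: the interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses, and port 4420 are simply the values this run used, and any pair of ports on the same host can stand in for them.

# Isolate the target-side port in its own network namespace so initiator
# and target traffic crosses a real TCP path on a single host.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# Initiator side stays in the root namespace.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip link set cvl_0_1 up
# Target side is configured inside the namespace.
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Admit the NVMe/TCP listener port, then verify reachability both ways.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1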
00:32:31.184 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:31.184 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:31.184 [2024-11-19 10:59:37.773186] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:31.184 [2024-11-19 10:59:37.774156] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:32:31.184 [2024-11-19 10:59:37.774196] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:31.184 [2024-11-19 10:59:37.850979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:31.184 [2024-11-19 10:59:37.892886] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:31.184 [2024-11-19 10:59:37.892924] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:31.184 [2024-11-19 10:59:37.892932] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:31.184 [2024-11-19 10:59:37.892937] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:31.184 [2024-11-19 10:59:37.892943] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:31.184 [2024-11-19 10:59:37.894395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:31.184 [2024-11-19 10:59:37.894501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:31.184 [2024-11-19 10:59:37.894588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:31.184 [2024-11-19 10:59:37.894589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:31.184 [2024-11-19 10:59:37.963064] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:31.184 [2024-11-19 10:59:37.963362] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:31.184 [2024-11-19 10:59:37.963944] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:31.184 [2024-11-19 10:59:37.964246] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:31.184 [2024-11-19 10:59:37.964301] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
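The reactor and thread.c notices above come from the target being run with --interrupt-mode, so its reactors sleep on events instead of busy-polling. The provisioning that follows is all driven through scripts/rpc.py; condensed into a sketch, with the long /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk prefix abbreviated to $SPDK here purely for readability, it amounts to:

# Launch the target inside the namespace: -i 0 = shared-memory id,
# -e 0xFFFF = all tracepoint groups, -m 0xF = cores 0-3 (hence the
# four reactor threads logged above).
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
# Create the TCP transport with the options the harness passes.
$SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
# Back the namespaces with RAM disks: 64 MiB malloc bdevs with 512 B
# blocks (Malloc0..Malloc6 in this run), plus RAID-0 and concat arrays.
$SPDK/scripts/rpc.py bdev_malloc_create 64 512
$SPDK/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$SPDK/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
# Export everything through one subsystem (-s sets the serial number the
# later waitforserial check greps for) and a listener on the namespace IP.
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# The kernel initiator attaches from the root namespace (the hostnqn/hostid
# flags from the log are omitted here); with four namespaces attached the
# devices surface as /dev/nvme0n1..nvme0n4 for the fio jobs below.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420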
00:32:31.184 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:31.184 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:32:31.184 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:31.184 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:31.184 10:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:31.184 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:31.184 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:31.184 [2024-11-19 10:59:38.211391] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:31.184 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:31.184 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:32:31.184 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:31.443 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:32:31.443 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:31.702 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:32:31.702 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:31.702 10:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:32:31.702 10:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:32:31.961 10:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:32.219 10:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:32:32.219 10:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:32.479 10:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:32:32.479 10:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:32.738 10:59:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:32:32.738 10:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:32:32.738 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:32.996 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:32.996 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:33.255 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:33.255 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:33.572 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:33.572 [2024-11-19 10:59:40.887324] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:33.572 10:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:32:33.871 10:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:32:34.130 10:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:34.389 10:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:32:34.389 10:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:32:34.389 10:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:34.389 10:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:32:34.389 10:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:32:34.389 10:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:32:36.294 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:36.294 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:32:36.294 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:36.294 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:32:36.294 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:36.294 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:32:36.294 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:36.294 [global] 00:32:36.294 thread=1 00:32:36.294 invalidate=1 00:32:36.294 rw=write 00:32:36.294 time_based=1 00:32:36.294 runtime=1 00:32:36.294 ioengine=libaio 00:32:36.294 direct=1 00:32:36.294 bs=4096 00:32:36.294 iodepth=1 00:32:36.294 norandommap=0 00:32:36.294 numjobs=1 00:32:36.294 00:32:36.294 verify_dump=1 00:32:36.294 verify_backlog=512 00:32:36.294 verify_state_save=0 00:32:36.294 do_verify=1 00:32:36.294 verify=crc32c-intel 00:32:36.294 [job0] 00:32:36.294 filename=/dev/nvme0n1 00:32:36.294 [job1] 00:32:36.294 filename=/dev/nvme0n2 00:32:36.294 [job2] 00:32:36.294 filename=/dev/nvme0n3 00:32:36.294 [job3] 00:32:36.294 filename=/dev/nvme0n4 00:32:36.294 Could not set queue depth (nvme0n1) 00:32:36.294 Could not set queue depth (nvme0n2) 00:32:36.294 Could not set queue depth (nvme0n3) 00:32:36.294 Could not set queue depth (nvme0n4) 00:32:36.553 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:36.553 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:36.553 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:36.553 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:36.553 fio-3.35 00:32:36.553 Starting 4 threads 00:32:37.930 00:32:37.930 job0: (groupid=0, jobs=1): err= 0: pid=1919413: Tue Nov 19 10:59:45 2024 00:32:37.930 read: IOPS=2240, BW=8963KiB/s (9178kB/s)(8972KiB/1001msec) 00:32:37.930 slat (nsec): min=6988, max=38919, avg=8099.83, stdev=1335.74 00:32:37.930 clat (usec): min=169, max=504, avg=234.79, stdev=32.04 00:32:37.930 lat (usec): min=177, max=512, avg=242.89, stdev=32.09 00:32:37.930 clat percentiles (usec): 00:32:37.930 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 208], 00:32:37.930 | 30.00th=[ 225], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 245], 00:32:37.930 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 258], 95.00th=[ 277], 00:32:37.930 | 99.00th=[ 330], 99.50th=[ 400], 99.90th=[ 490], 99.95th=[ 490], 00:32:37.930 | 99.99th=[ 506] 00:32:37.930 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:32:37.930 slat (nsec): min=10145, max=63661, avg=12120.08, stdev=2450.22 00:32:37.930 clat (usec): min=122, max=571, avg=159.83, stdev=31.53 00:32:37.930 lat (usec): min=133, max=582, avg=171.95, stdev=32.50 00:32:37.930 clat percentiles (usec): 00:32:37.930 | 1.00th=[ 126], 5.00th=[ 130], 10.00th=[ 133], 20.00th=[ 137], 00:32:37.930 | 30.00th=[ 139], 40.00th=[ 143], 50.00th=[ 149], 60.00th=[ 163], 00:32:37.930 | 70.00th=[ 174], 80.00th=[ 182], 90.00th=[ 198], 95.00th=[ 212], 00:32:37.930 | 99.00th=[ 249], 99.50th=[ 
277], 99.90th=[ 420], 99.95th=[ 478], 00:32:37.930 | 99.99th=[ 570] 00:32:37.930 bw ( KiB/s): min=11160, max=11160, per=45.86%, avg=11160.00, stdev= 0.00, samples=1 00:32:37.930 iops : min= 2790, max= 2790, avg=2790.00, stdev= 0.00, samples=1 00:32:37.930 lat (usec) : 250=90.69%, 500=9.27%, 750=0.04% 00:32:37.930 cpu : usr=4.90%, sys=6.90%, ctx=4804, majf=0, minf=1 00:32:37.930 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:37.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.930 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.930 issued rwts: total=2243,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.930 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:37.930 job1: (groupid=0, jobs=1): err= 0: pid=1919417: Tue Nov 19 10:59:45 2024 00:32:37.930 read: IOPS=2250, BW=9003KiB/s (9219kB/s)(9012KiB/1001msec) 00:32:37.930 slat (nsec): min=7303, max=24547, avg=8479.72, stdev=991.18 00:32:37.930 clat (usec): min=167, max=497, avg=243.71, stdev=47.86 00:32:37.930 lat (usec): min=175, max=505, avg=252.19, stdev=47.89 00:32:37.930 clat percentiles (usec): 00:32:37.930 | 1.00th=[ 180], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 212], 00:32:37.930 | 30.00th=[ 225], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 247], 00:32:37.930 | 70.00th=[ 251], 80.00th=[ 260], 90.00th=[ 285], 95.00th=[ 375], 00:32:37.930 | 99.00th=[ 424], 99.50th=[ 441], 99.90th=[ 486], 99.95th=[ 486], 00:32:37.930 | 99.99th=[ 498] 00:32:37.930 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:32:37.930 slat (nsec): min=10934, max=40676, avg=12025.34, stdev=1801.21 00:32:37.930 clat (usec): min=121, max=565, avg=151.03, stdev=21.19 00:32:37.930 lat (usec): min=132, max=577, avg=163.06, stdev=21.41 00:32:37.930 clat percentiles (usec): 00:32:37.930 | 1.00th=[ 127], 5.00th=[ 129], 10.00th=[ 133], 20.00th=[ 137], 00:32:37.930 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 151], 00:32:37.930 | 70.00th=[ 157], 80.00th=[ 163], 90.00th=[ 176], 95.00th=[ 186], 00:32:37.930 | 99.00th=[ 204], 99.50th=[ 225], 99.90th=[ 334], 99.95th=[ 469], 00:32:37.930 | 99.99th=[ 562] 00:32:37.930 bw ( KiB/s): min=10672, max=10672, per=43.86%, avg=10672.00, stdev= 0.00, samples=1 00:32:37.930 iops : min= 2668, max= 2668, avg=2668.00, stdev= 0.00, samples=1 00:32:37.930 lat (usec) : 250=85.56%, 500=14.42%, 750=0.02% 00:32:37.930 cpu : usr=3.00%, sys=8.80%, ctx=4816, majf=0, minf=1 00:32:37.930 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:37.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.930 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.930 issued rwts: total=2253,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.930 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:37.930 job2: (groupid=0, jobs=1): err= 0: pid=1919425: Tue Nov 19 10:59:45 2024 00:32:37.930 read: IOPS=23, BW=95.3KiB/s (97.6kB/s)(96.0KiB/1007msec) 00:32:37.930 slat (nsec): min=11249, max=22778, avg=20853.54, stdev=3538.36 00:32:37.930 clat (usec): min=270, max=41123, avg=37557.20, stdev=11484.47 00:32:37.930 lat (usec): min=292, max=41146, avg=37578.06, stdev=11484.07 00:32:37.930 clat percentiles (usec): 00:32:37.930 | 1.00th=[ 269], 5.00th=[ 273], 10.00th=[40633], 20.00th=[40633], 00:32:37.930 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:37.930 | 70.00th=[41157], 80.00th=[41157], 
90.00th=[41157], 95.00th=[41157], 00:32:37.930 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:37.930 | 99.99th=[41157] 00:32:37.930 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:32:37.930 slat (nsec): min=12205, max=39638, avg=13404.79, stdev=2257.60 00:32:37.931 clat (usec): min=163, max=273, avg=187.43, stdev=13.08 00:32:37.931 lat (usec): min=176, max=313, avg=200.84, stdev=13.71 00:32:37.931 clat percentiles (usec): 00:32:37.931 | 1.00th=[ 167], 5.00th=[ 172], 10.00th=[ 174], 20.00th=[ 178], 00:32:37.931 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 188], 00:32:37.931 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 206], 95.00th=[ 215], 00:32:37.931 | 99.00th=[ 229], 99.50th=[ 235], 99.90th=[ 273], 99.95th=[ 273], 00:32:37.931 | 99.99th=[ 273] 00:32:37.931 bw ( KiB/s): min= 4096, max= 4096, per=16.83%, avg=4096.00, stdev= 0.00, samples=1 00:32:37.931 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:37.931 lat (usec) : 250=95.34%, 500=0.56% 00:32:37.931 lat (msec) : 50=4.10% 00:32:37.931 cpu : usr=0.89%, sys=0.60%, ctx=536, majf=0, minf=1 00:32:37.931 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:37.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.931 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.931 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.931 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:37.931 job3: (groupid=0, jobs=1): err= 0: pid=1919428: Tue Nov 19 10:59:45 2024 00:32:37.931 read: IOPS=67, BW=269KiB/s (276kB/s)(272KiB/1010msec) 00:32:37.931 slat (nsec): min=7934, max=30159, avg=11453.32, stdev=4691.18 00:32:37.931 clat (usec): min=230, max=41193, avg=13418.82, stdev=19144.31 00:32:37.931 lat (usec): min=239, max=41201, avg=13430.27, stdev=19145.90 00:32:37.931 clat percentiles (usec): 00:32:37.931 | 1.00th=[ 231], 5.00th=[ 239], 10.00th=[ 245], 20.00th=[ 260], 00:32:37.931 | 30.00th=[ 285], 40.00th=[ 289], 50.00th=[ 289], 60.00th=[ 297], 00:32:37.931 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:37.931 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:37.931 | 99.99th=[41157] 00:32:37.931 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:32:37.931 slat (nsec): min=9481, max=37587, avg=11365.33, stdev=2691.76 00:32:37.931 clat (usec): min=148, max=324, avg=174.23, stdev=14.81 00:32:37.931 lat (usec): min=159, max=362, avg=185.59, stdev=15.58 00:32:37.931 clat percentiles (usec): 00:32:37.931 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 165], 00:32:37.931 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 172], 60.00th=[ 176], 00:32:37.931 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 190], 95.00th=[ 196], 00:32:37.931 | 99.00th=[ 217], 99.50th=[ 225], 99.90th=[ 326], 99.95th=[ 326], 00:32:37.931 | 99.99th=[ 326] 00:32:37.931 bw ( KiB/s): min= 4096, max= 4096, per=16.83%, avg=4096.00, stdev= 0.00, samples=1 00:32:37.931 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:37.931 lat (usec) : 250=89.14%, 500=7.07% 00:32:37.931 lat (msec) : 50=3.79% 00:32:37.931 cpu : usr=0.30%, sys=0.59%, ctx=580, majf=0, minf=1 00:32:37.931 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:37.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.931 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.931 issued rwts: total=68,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.931 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:37.931 00:32:37.931 Run status group 0 (all jobs): 00:32:37.931 READ: bw=17.7MiB/s (18.6MB/s), 95.3KiB/s-9003KiB/s (97.6kB/s-9219kB/s), io=17.9MiB (18.8MB), run=1001-1010msec 00:32:37.931 WRITE: bw=23.8MiB/s (24.9MB/s), 2028KiB/s-9.99MiB/s (2076kB/s-10.5MB/s), io=24.0MiB (25.2MB), run=1001-1010msec 00:32:37.931 00:32:37.931 Disk stats (read/write): 00:32:37.931 nvme0n1: ios=2020/2048, merge=0/0, ticks=466/305, in_queue=771, util=87.07% 00:32:37.931 nvme0n2: ios=2008/2048, merge=0/0, ticks=1447/292, in_queue=1739, util=98.07% 00:32:37.931 nvme0n3: ios=20/512, merge=0/0, ticks=738/90, in_queue=828, util=88.92% 00:32:37.931 nvme0n4: ios=30/512, merge=0/0, ticks=739/80, in_queue=819, util=89.58% 00:32:37.931 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:32:37.931 [global] 00:32:37.931 thread=1 00:32:37.931 invalidate=1 00:32:37.931 rw=randwrite 00:32:37.931 time_based=1 00:32:37.931 runtime=1 00:32:37.931 ioengine=libaio 00:32:37.931 direct=1 00:32:37.931 bs=4096 00:32:37.931 iodepth=1 00:32:37.931 norandommap=0 00:32:37.931 numjobs=1 00:32:37.931 00:32:37.931 verify_dump=1 00:32:37.931 verify_backlog=512 00:32:37.931 verify_state_save=0 00:32:37.931 do_verify=1 00:32:37.931 verify=crc32c-intel 00:32:37.931 [job0] 00:32:37.931 filename=/dev/nvme0n1 00:32:37.931 [job1] 00:32:37.931 filename=/dev/nvme0n2 00:32:37.931 [job2] 00:32:37.931 filename=/dev/nvme0n3 00:32:37.931 [job3] 00:32:37.931 filename=/dev/nvme0n4 00:32:37.931 Could not set queue depth (nvme0n1) 00:32:37.931 Could not set queue depth (nvme0n2) 00:32:37.931 Could not set queue depth (nvme0n3) 00:32:37.931 Could not set queue depth (nvme0n4) 00:32:38.190 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:38.190 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:38.190 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:38.190 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:38.190 fio-3.35 00:32:38.190 Starting 4 threads 00:32:39.567 00:32:39.567 job0: (groupid=0, jobs=1): err= 0: pid=1919801: Tue Nov 19 10:59:46 2024 00:32:39.567 read: IOPS=1641, BW=6565KiB/s (6723kB/s)(6572KiB/1001msec) 00:32:39.567 slat (nsec): min=6793, max=23713, avg=7791.45, stdev=1736.41 00:32:39.567 clat (usec): min=207, max=669, avg=334.73, stdev=77.64 00:32:39.567 lat (usec): min=214, max=679, avg=342.52, stdev=77.85 00:32:39.567 clat percentiles (usec): 00:32:39.567 | 1.00th=[ 227], 5.00th=[ 247], 10.00th=[ 260], 20.00th=[ 273], 00:32:39.567 | 30.00th=[ 285], 40.00th=[ 297], 50.00th=[ 310], 60.00th=[ 330], 00:32:39.567 | 70.00th=[ 363], 80.00th=[ 400], 90.00th=[ 445], 95.00th=[ 494], 00:32:39.567 | 99.00th=[ 553], 99.50th=[ 652], 99.90th=[ 668], 99.95th=[ 668], 00:32:39.567 | 99.99th=[ 668] 00:32:39.567 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:32:39.567 slat (nsec): min=6826, max=40878, avg=10533.97, stdev=1802.52 00:32:39.567 clat (usec): min=129, max=375, avg=199.17, stdev=35.68 00:32:39.567 lat (usec): min=146, max=385, avg=209.70, 
stdev=35.72 00:32:39.567 clat percentiles (usec): 00:32:39.567 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 161], 00:32:39.567 | 30.00th=[ 172], 40.00th=[ 190], 50.00th=[ 204], 60.00th=[ 212], 00:32:39.567 | 70.00th=[ 221], 80.00th=[ 231], 90.00th=[ 247], 95.00th=[ 258], 00:32:39.567 | 99.00th=[ 277], 99.50th=[ 285], 99.90th=[ 318], 99.95th=[ 322], 00:32:39.567 | 99.99th=[ 375] 00:32:39.567 bw ( KiB/s): min= 8192, max= 8192, per=24.17%, avg=8192.00, stdev= 0.00, samples=1 00:32:39.567 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:32:39.567 lat (usec) : 250=54.27%, 500=43.81%, 750=1.92% 00:32:39.567 cpu : usr=1.80%, sys=3.60%, ctx=3693, majf=0, minf=1 00:32:39.567 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:39.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:39.567 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:39.567 issued rwts: total=1643,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:39.567 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:39.567 job1: (groupid=0, jobs=1): err= 0: pid=1919806: Tue Nov 19 10:59:46 2024 00:32:39.567 read: IOPS=1626, BW=6505KiB/s (6662kB/s)(6512KiB/1001msec) 00:32:39.567 slat (nsec): min=6439, max=22828, avg=7559.64, stdev=969.38 00:32:39.567 clat (usec): min=200, max=674, avg=322.41, stdev=75.93 00:32:39.567 lat (usec): min=207, max=681, avg=329.97, stdev=75.90 00:32:39.567 clat percentiles (usec): 00:32:39.567 | 1.00th=[ 219], 5.00th=[ 229], 10.00th=[ 237], 20.00th=[ 255], 00:32:39.567 | 30.00th=[ 277], 40.00th=[ 289], 50.00th=[ 306], 60.00th=[ 322], 00:32:39.567 | 70.00th=[ 351], 80.00th=[ 388], 90.00th=[ 445], 95.00th=[ 478], 00:32:39.567 | 99.00th=[ 510], 99.50th=[ 519], 99.90th=[ 644], 99.95th=[ 676], 00:32:39.567 | 99.99th=[ 676] 00:32:39.567 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:32:39.567 slat (nsec): min=9538, max=37652, avg=10813.93, stdev=1417.92 00:32:39.567 clat (usec): min=143, max=869, avg=211.27, stdev=47.20 00:32:39.567 lat (usec): min=153, max=880, avg=222.08, stdev=47.11 00:32:39.567 clat percentiles (usec): 00:32:39.567 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 174], 00:32:39.567 | 30.00th=[ 184], 40.00th=[ 196], 50.00th=[ 206], 60.00th=[ 215], 00:32:39.567 | 70.00th=[ 229], 80.00th=[ 241], 90.00th=[ 253], 95.00th=[ 289], 00:32:39.567 | 99.00th=[ 363], 99.50th=[ 379], 99.90th=[ 635], 99.95th=[ 799], 00:32:39.567 | 99.99th=[ 873] 00:32:39.567 bw ( KiB/s): min= 8192, max= 8192, per=24.17%, avg=8192.00, stdev= 0.00, samples=1 00:32:39.567 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:32:39.567 lat (usec) : 250=57.13%, 500=41.65%, 750=1.17%, 1000=0.05% 00:32:39.567 cpu : usr=1.50%, sys=4.00%, ctx=3678, majf=0, minf=1 00:32:39.567 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:39.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:39.567 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:39.567 issued rwts: total=1628,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:39.567 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:39.567 job2: (groupid=0, jobs=1): err= 0: pid=1919823: Tue Nov 19 10:59:46 2024 00:32:39.567 read: IOPS=1527, BW=6111KiB/s (6258kB/s)(6160KiB/1008msec) 00:32:39.567 slat (nsec): min=7208, max=30829, avg=10285.49, stdev=2048.66 00:32:39.567 clat (usec): min=209, max=41162, avg=387.83, stdev=2308.49 
00:32:39.567 lat (usec): min=224, max=41176, avg=398.12, stdev=2308.63 00:32:39.567 clat percentiles (usec): 00:32:39.567 | 1.00th=[ 223], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 237], 00:32:39.567 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 249], 00:32:39.567 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 269], 95.00th=[ 408], 00:32:39.567 | 99.00th=[ 437], 99.50th=[ 445], 99.90th=[41157], 99.95th=[41157], 00:32:39.567 | 99.99th=[41157] 00:32:39.567 write: IOPS=2031, BW=8127KiB/s (8322kB/s)(8192KiB/1008msec); 0 zone resets 00:32:39.567 slat (nsec): min=9850, max=42772, avg=13990.74, stdev=3148.08 00:32:39.567 clat (usec): min=137, max=676, avg=172.42, stdev=22.70 00:32:39.567 lat (usec): min=150, max=687, avg=186.41, stdev=22.93 00:32:39.567 clat percentiles (usec): 00:32:39.567 | 1.00th=[ 149], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 163], 00:32:39.567 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 174], 00:32:39.567 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 186], 95.00th=[ 192], 00:32:39.567 | 99.00th=[ 215], 99.50th=[ 235], 99.90th=[ 578], 99.95th=[ 635], 00:32:39.567 | 99.99th=[ 676] 00:32:39.567 bw ( KiB/s): min= 8192, max= 8192, per=24.17%, avg=8192.00, stdev= 0.00, samples=2 00:32:39.567 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:32:39.567 lat (usec) : 250=84.75%, 500=14.99%, 750=0.11% 00:32:39.567 lat (msec) : 50=0.14% 00:32:39.567 cpu : usr=2.48%, sys=4.47%, ctx=3589, majf=0, minf=1 00:32:39.567 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:39.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:39.567 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:39.567 issued rwts: total=1540,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:39.567 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:39.567 job3: (groupid=0, jobs=1): err= 0: pid=1919828: Tue Nov 19 10:59:46 2024 00:32:39.567 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:32:39.567 slat (nsec): min=7696, max=54018, avg=9738.03, stdev=2049.09 00:32:39.567 clat (usec): min=209, max=467, avg=246.73, stdev=18.94 00:32:39.567 lat (usec): min=218, max=486, avg=256.46, stdev=19.50 00:32:39.567 clat percentiles (usec): 00:32:39.567 | 1.00th=[ 223], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 237], 00:32:39.567 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 247], 00:32:39.567 | 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 262], 95.00th=[ 269], 00:32:39.567 | 99.00th=[ 289], 99.50th=[ 371], 99.90th=[ 457], 99.95th=[ 465], 00:32:39.567 | 99.99th=[ 469] 00:32:39.568 write: IOPS=2393, BW=9574KiB/s (9804kB/s)(9584KiB/1001msec); 0 zone resets 00:32:39.568 slat (nsec): min=9639, max=42312, avg=12805.76, stdev=2743.54 00:32:39.568 clat (usec): min=132, max=297, avg=179.55, stdev=17.66 00:32:39.568 lat (usec): min=158, max=311, avg=192.36, stdev=18.15 00:32:39.568 clat percentiles (usec): 00:32:39.568 | 1.00th=[ 153], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 167], 00:32:39.568 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 180], 00:32:39.568 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 204], 95.00th=[ 212], 00:32:39.568 | 99.00th=[ 237], 99.50th=[ 281], 99.90th=[ 293], 99.95th=[ 297], 00:32:39.568 | 99.99th=[ 297] 00:32:39.568 bw ( KiB/s): min= 9672, max= 9672, per=28.54%, avg=9672.00, stdev= 0.00, samples=1 00:32:39.568 iops : min= 2418, max= 2418, avg=2418.00, stdev= 0.00, samples=1 00:32:39.568 lat (usec) : 250=85.15%, 500=14.85% 00:32:39.568 cpu : 
usr=4.20%, sys=6.90%, ctx=4445, majf=0, minf=1 00:32:39.568 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:39.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:39.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:39.568 issued rwts: total=2048,2396,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:39.568 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:39.568 00:32:39.568 Run status group 0 (all jobs): 00:32:39.568 READ: bw=26.6MiB/s (27.9MB/s), 6111KiB/s-8184KiB/s (6258kB/s-8380kB/s), io=26.8MiB (28.1MB), run=1001-1008msec 00:32:39.568 WRITE: bw=33.1MiB/s (34.7MB/s), 8127KiB/s-9574KiB/s (8322kB/s-9804kB/s), io=33.4MiB (35.0MB), run=1001-1008msec 00:32:39.568 00:32:39.568 Disk stats (read/write): 00:32:39.568 nvme0n1: ios=1471/1536, merge=0/0, ticks=862/321, in_queue=1183, util=97.29% 00:32:39.568 nvme0n2: ios=1451/1536, merge=0/0, ticks=1382/336, in_queue=1718, util=97.06% 00:32:39.568 nvme0n3: ios=1577/2048, merge=0/0, ticks=608/341, in_queue=949, util=97.09% 00:32:39.568 nvme0n4: ios=1770/2048, merge=0/0, ticks=621/339, in_queue=960, util=97.06% 00:32:39.568 10:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:32:39.568 [global] 00:32:39.568 thread=1 00:32:39.568 invalidate=1 00:32:39.568 rw=write 00:32:39.568 time_based=1 00:32:39.568 runtime=1 00:32:39.568 ioengine=libaio 00:32:39.568 direct=1 00:32:39.568 bs=4096 00:32:39.568 iodepth=128 00:32:39.568 norandommap=0 00:32:39.568 numjobs=1 00:32:39.568 00:32:39.568 verify_dump=1 00:32:39.568 verify_backlog=512 00:32:39.568 verify_state_save=0 00:32:39.568 do_verify=1 00:32:39.568 verify=crc32c-intel 00:32:39.568 [job0] 00:32:39.568 filename=/dev/nvme0n1 00:32:39.568 [job1] 00:32:39.568 filename=/dev/nvme0n2 00:32:39.568 [job2] 00:32:39.568 filename=/dev/nvme0n3 00:32:39.568 [job3] 00:32:39.568 filename=/dev/nvme0n4 00:32:39.568 Could not set queue depth (nvme0n1) 00:32:39.568 Could not set queue depth (nvme0n2) 00:32:39.568 Could not set queue depth (nvme0n3) 00:32:39.568 Could not set queue depth (nvme0n4) 00:32:39.836 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:39.836 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:39.836 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:39.836 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:39.836 fio-3.35 00:32:39.836 Starting 4 threads 00:32:41.213 00:32:41.213 job0: (groupid=0, jobs=1): err= 0: pid=1920204: Tue Nov 19 10:59:48 2024 00:32:41.213 read: IOPS=5834, BW=22.8MiB/s (23.9MB/s)(23.0MiB/1008msec) 00:32:41.213 slat (nsec): min=1372, max=10786k, avg=93391.34, stdev=781230.86 00:32:41.213 clat (usec): min=1013, max=22219, avg=11520.52, stdev=2776.41 00:32:41.213 lat (usec): min=3140, max=23753, avg=11613.91, stdev=2858.35 00:32:41.213 clat percentiles (usec): 00:32:41.213 | 1.00th=[ 6521], 5.00th=[ 7898], 10.00th=[ 8979], 20.00th=[ 9634], 00:32:41.213 | 30.00th=[10028], 40.00th=[10683], 50.00th=[10945], 60.00th=[11338], 00:32:41.213 | 70.00th=[11731], 80.00th=[12911], 90.00th=[15664], 95.00th=[17695], 00:32:41.213 | 99.00th=[20579], 99.50th=[21103], 99.90th=[21890], 99.95th=[21890], 
00:32:41.213 | 99.99th=[22152] 00:32:41.213 write: IOPS=6095, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1008msec); 0 zone resets 00:32:41.213 slat (usec): min=2, max=9431, avg=69.35, stdev=482.35 00:32:41.213 clat (usec): min=1455, max=21793, avg=9784.74, stdev=2396.14 00:32:41.213 lat (usec): min=1471, max=21796, avg=9854.10, stdev=2424.49 00:32:41.213 clat percentiles (usec): 00:32:41.213 | 1.00th=[ 3425], 5.00th=[ 6259], 10.00th=[ 6718], 20.00th=[ 7767], 00:32:41.213 | 30.00th=[ 8717], 40.00th=[ 9503], 50.00th=[10028], 60.00th=[10290], 00:32:41.213 | 70.00th=[10683], 80.00th=[11731], 90.00th=[11994], 95.00th=[14091], 00:32:41.213 | 99.00th=[16057], 99.50th=[16188], 99.90th=[20579], 99.95th=[21103], 00:32:41.213 | 99.99th=[21890] 00:32:41.213 bw ( KiB/s): min=24576, max=24576, per=31.69%, avg=24576.00, stdev= 0.00, samples=2 00:32:41.213 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:32:41.213 lat (msec) : 2=0.12%, 4=0.73%, 10=39.38%, 20=58.78%, 50=1.00% 00:32:41.213 cpu : usr=4.07%, sys=6.06%, ctx=486, majf=0, minf=2 00:32:41.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:32:41.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:41.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:41.213 issued rwts: total=5881,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:41.213 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:41.213 job1: (groupid=0, jobs=1): err= 0: pid=1920217: Tue Nov 19 10:59:48 2024 00:32:41.213 read: IOPS=5322, BW=20.8MiB/s (21.8MB/s)(20.8MiB/1001msec) 00:32:41.213 slat (nsec): min=1575, max=3902.4k, avg=87161.96, stdev=456105.74 00:32:41.213 clat (usec): min=489, max=22775, avg=11275.12, stdev=1827.59 00:32:41.213 lat (usec): min=1856, max=22777, avg=11362.29, stdev=1856.20 00:32:41.213 clat percentiles (usec): 00:32:41.213 | 1.00th=[ 4555], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[10028], 00:32:41.213 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11338], 60.00th=[11731], 00:32:41.213 | 70.00th=[11994], 80.00th=[12387], 90.00th=[13173], 95.00th=[13960], 00:32:41.213 | 99.00th=[16450], 99.50th=[19268], 99.90th=[19268], 99.95th=[19268], 00:32:41.213 | 99.99th=[22676] 00:32:41.213 write: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec); 0 zone resets 00:32:41.213 slat (usec): min=2, max=22939, avg=90.20, stdev=600.47 00:32:41.213 clat (usec): min=6853, max=50754, avg=11818.19, stdev=5039.14 00:32:41.213 lat (usec): min=6859, max=50772, avg=11908.39, stdev=5070.97 00:32:41.213 clat percentiles (usec): 00:32:41.213 | 1.00th=[ 7439], 5.00th=[ 8586], 10.00th=[ 9634], 20.00th=[10290], 00:32:41.213 | 30.00th=[10552], 40.00th=[10683], 50.00th=[11076], 60.00th=[11600], 00:32:41.213 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12387], 95.00th=[13173], 00:32:41.213 | 99.00th=[46924], 99.50th=[47973], 99.90th=[47973], 99.95th=[47973], 00:32:41.213 | 99.99th=[50594] 00:32:41.213 bw ( KiB/s): min=20600, max=20600, per=26.56%, avg=20600.00, stdev= 0.00, samples=1 00:32:41.213 iops : min= 5150, max= 5150, avg=5150.00, stdev= 0.00, samples=1 00:32:41.213 lat (usec) : 500=0.01% 00:32:41.213 lat (msec) : 2=0.14%, 10=16.37%, 20=81.93%, 50=1.53%, 100=0.02% 00:32:41.213 cpu : usr=3.70%, sys=6.00%, ctx=535, majf=0, minf=1 00:32:41.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:32:41.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:41.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:32:41.213 issued rwts: total=5328,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:41.213 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:41.213 job2: (groupid=0, jobs=1): err= 0: pid=1920234: Tue Nov 19 10:59:48 2024 00:32:41.213 read: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec) 00:32:41.213 slat (nsec): min=1161, max=12406k, avg=111622.68, stdev=865108.57 00:32:41.213 clat (usec): min=3677, max=32871, avg=14580.29, stdev=3885.02 00:32:41.213 lat (usec): min=3686, max=32878, avg=14691.91, stdev=3966.06 00:32:41.213 clat percentiles (usec): 00:32:41.213 | 1.00th=[ 5932], 5.00th=[ 9896], 10.00th=[11338], 20.00th=[12387], 00:32:41.213 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13173], 60.00th=[13829], 00:32:41.213 | 70.00th=[15401], 80.00th=[18220], 90.00th=[20841], 95.00th=[21890], 00:32:41.213 | 99.00th=[25560], 99.50th=[25560], 99.90th=[26870], 99.95th=[28705], 00:32:41.213 | 99.99th=[32900] 00:32:41.213 write: IOPS=4664, BW=18.2MiB/s (19.1MB/s)(18.3MiB/1007msec); 0 zone resets 00:32:41.213 slat (nsec): min=1996, max=20208k, avg=84628.56, stdev=728023.07 00:32:41.213 clat (usec): min=196, max=38613, avg=12371.53, stdev=4319.12 00:32:41.213 lat (usec): min=209, max=38625, avg=12456.16, stdev=4374.49 00:32:41.213 clat percentiles (usec): 00:32:41.213 | 1.00th=[ 2540], 5.00th=[ 6325], 10.00th=[ 7570], 20.00th=[ 9765], 00:32:41.213 | 30.00th=[10814], 40.00th=[11338], 50.00th=[12125], 60.00th=[12911], 00:32:41.213 | 70.00th=[13698], 80.00th=[13829], 90.00th=[16909], 95.00th=[21103], 00:32:41.213 | 99.00th=[27395], 99.50th=[31851], 99.90th=[32113], 99.95th=[32375], 00:32:41.213 | 99.99th=[38536] 00:32:41.213 bw ( KiB/s): min=17584, max=19328, per=23.80%, avg=18456.00, stdev=1233.19, samples=2 00:32:41.213 iops : min= 4396, max= 4832, avg=4614.00, stdev=308.30, samples=2 00:32:41.213 lat (usec) : 250=0.01%, 1000=0.10% 00:32:41.213 lat (msec) : 2=0.20%, 4=0.70%, 10=13.39%, 20=76.14%, 50=9.46% 00:32:41.213 cpu : usr=3.78%, sys=5.67%, ctx=307, majf=0, minf=1 00:32:41.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:32:41.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:41.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:41.213 issued rwts: total=4608,4697,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:41.213 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:41.213 job3: (groupid=0, jobs=1): err= 0: pid=1920239: Tue Nov 19 10:59:48 2024 00:32:41.213 read: IOPS=2914, BW=11.4MiB/s (11.9MB/s)(11.5MiB/1008msec) 00:32:41.213 slat (nsec): min=1319, max=22489k, avg=159174.81, stdev=1076537.73 00:32:41.213 clat (usec): min=4471, max=95313, avg=22927.48, stdev=15535.38 00:32:41.213 lat (usec): min=8818, max=95341, avg=23086.65, stdev=15626.40 00:32:41.213 clat percentiles (usec): 00:32:41.214 | 1.00th=[ 9372], 5.00th=[11338], 10.00th=[11994], 20.00th=[12911], 00:32:41.214 | 30.00th=[13304], 40.00th=[13698], 50.00th=[14615], 60.00th=[16319], 00:32:41.214 | 70.00th=[25035], 80.00th=[32900], 90.00th=[47973], 95.00th=[55313], 00:32:41.214 | 99.00th=[80217], 99.50th=[81265], 99.90th=[94897], 99.95th=[94897], 00:32:41.214 | 99.99th=[94897] 00:32:41.214 write: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec); 0 zone resets 00:32:41.214 slat (usec): min=2, max=28915, avg=159.24, stdev=963.17 00:32:41.214 clat (usec): min=2278, max=83417, avg=18360.33, stdev=14405.90 00:32:41.214 lat (usec): min=2292, max=83423, avg=18519.58, stdev=14508.84 00:32:41.214 clat percentiles (usec): 
00:32:41.214 | 1.00th=[ 5145], 5.00th=[ 9765], 10.00th=[11600], 20.00th=[12780], 00:32:41.214 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13435], 60.00th=[13435], 00:32:41.214 | 70.00th=[13698], 80.00th=[17695], 90.00th=[30540], 95.00th=[57410], 00:32:41.214 | 99.00th=[77071], 99.50th=[81265], 99.90th=[83362], 99.95th=[83362], 00:32:41.214 | 99.99th=[83362] 00:32:41.214 bw ( KiB/s): min= 8192, max=16351, per=15.82%, avg=12271.50, stdev=5769.28, samples=2 00:32:41.214 iops : min= 2048, max= 4087, avg=3067.50, stdev=1441.79, samples=2 00:32:41.214 lat (msec) : 4=0.50%, 10=3.03%, 20=68.45%, 50=21.58%, 100=6.44% 00:32:41.214 cpu : usr=1.99%, sys=6.06%, ctx=254, majf=0, minf=1 00:32:41.214 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:32:41.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:41.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:41.214 issued rwts: total=2938,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:41.214 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:41.214 00:32:41.214 Run status group 0 (all jobs): 00:32:41.214 READ: bw=72.7MiB/s (76.2MB/s), 11.4MiB/s-22.8MiB/s (11.9MB/s-23.9MB/s), io=73.3MiB (76.8MB), run=1001-1008msec 00:32:41.214 WRITE: bw=75.7MiB/s (79.4MB/s), 11.9MiB/s-23.8MiB/s (12.5MB/s-25.0MB/s), io=76.3MiB (80.1MB), run=1001-1008msec 00:32:41.214 00:32:41.214 Disk stats (read/write): 00:32:41.214 nvme0n1: ios=4990/5120, merge=0/0, ticks=55495/49443, in_queue=104938, util=96.69% 00:32:41.214 nvme0n2: ios=4628/4691, merge=0/0, ticks=18047/17554, in_queue=35601, util=96.55% 00:32:41.214 nvme0n3: ios=3642/4096, merge=0/0, ticks=46874/47234, in_queue=94108, util=97.50% 00:32:41.214 nvme0n4: ios=2580/2903, merge=0/0, ticks=18276/16095, in_queue=34371, util=98.22% 00:32:41.214 10:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:32:41.214 [global] 00:32:41.214 thread=1 00:32:41.214 invalidate=1 00:32:41.214 rw=randwrite 00:32:41.214 time_based=1 00:32:41.214 runtime=1 00:32:41.214 ioengine=libaio 00:32:41.214 direct=1 00:32:41.214 bs=4096 00:32:41.214 iodepth=128 00:32:41.214 norandommap=0 00:32:41.214 numjobs=1 00:32:41.214 00:32:41.214 verify_dump=1 00:32:41.214 verify_backlog=512 00:32:41.214 verify_state_save=0 00:32:41.214 do_verify=1 00:32:41.214 verify=crc32c-intel 00:32:41.214 [job0] 00:32:41.214 filename=/dev/nvme0n1 00:32:41.214 [job1] 00:32:41.214 filename=/dev/nvme0n2 00:32:41.214 [job2] 00:32:41.214 filename=/dev/nvme0n3 00:32:41.214 [job3] 00:32:41.214 filename=/dev/nvme0n4 00:32:41.214 Could not set queue depth (nvme0n1) 00:32:41.214 Could not set queue depth (nvme0n2) 00:32:41.214 Could not set queue depth (nvme0n3) 00:32:41.214 Could not set queue depth (nvme0n4) 00:32:41.472 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:41.472 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:41.472 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:41.472 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:41.472 fio-3.35 00:32:41.472 Starting 4 threads 00:32:42.851 00:32:42.851 job0: (groupid=0, jobs=1): err= 0: pid=1920620: Tue Nov 19 
10:59:49 2024 00:32:42.851 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec) 00:32:42.851 slat (nsec): min=1186, max=15092k, avg=132057.37, stdev=860774.80 00:32:42.851 clat (usec): min=369, max=44883, avg=17872.35, stdev=8591.56 00:32:42.851 lat (usec): min=709, max=59132, avg=18004.41, stdev=8633.43 00:32:42.851 clat percentiles (usec): 00:32:42.851 | 1.00th=[ 3720], 5.00th=[ 7898], 10.00th=[ 9765], 20.00th=[10552], 00:32:42.851 | 30.00th=[11994], 40.00th=[12518], 50.00th=[15664], 60.00th=[17957], 00:32:42.851 | 70.00th=[21365], 80.00th=[25822], 90.00th=[29754], 95.00th=[32637], 00:32:42.851 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44827], 99.95th=[44827], 00:32:42.851 | 99.99th=[44827] 00:32:42.851 write: IOPS=3325, BW=13.0MiB/s (13.6MB/s)(13.1MiB/1006msec); 0 zone resets 00:32:42.851 slat (nsec): min=1900, max=23484k, avg=173516.04, stdev=1134361.65 00:32:42.851 clat (usec): min=579, max=61703, avg=21461.47, stdev=11714.93 00:32:42.851 lat (usec): min=2085, max=61713, avg=21634.98, stdev=11794.51 00:32:42.851 clat percentiles (usec): 00:32:42.851 | 1.00th=[ 5211], 5.00th=[ 6915], 10.00th=[ 9503], 20.00th=[10552], 00:32:42.851 | 30.00th=[14091], 40.00th=[17433], 50.00th=[20579], 60.00th=[21627], 00:32:42.851 | 70.00th=[23987], 80.00th=[28443], 90.00th=[39584], 95.00th=[45876], 00:32:42.851 | 99.00th=[61604], 99.50th=[61604], 99.90th=[61604], 99.95th=[61604], 00:32:42.851 | 99.99th=[61604] 00:32:42.851 bw ( KiB/s): min=12288, max=13448, per=18.21%, avg=12868.00, stdev=820.24, samples=2 00:32:42.851 iops : min= 3072, max= 3362, avg=3217.00, stdev=205.06, samples=2 00:32:42.851 lat (usec) : 500=0.02%, 750=0.02% 00:32:42.851 lat (msec) : 4=0.62%, 10=14.54%, 20=40.13%, 50=43.04%, 100=1.64% 00:32:42.851 cpu : usr=1.99%, sys=3.18%, ctx=338, majf=0, minf=1 00:32:42.851 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:32:42.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.851 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:42.851 issued rwts: total=3072,3345,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.851 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:42.851 job1: (groupid=0, jobs=1): err= 0: pid=1920638: Tue Nov 19 10:59:49 2024 00:32:42.851 read: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec) 00:32:42.851 slat (nsec): min=1347, max=10384k, avg=82948.08, stdev=677909.83 00:32:42.851 clat (usec): min=3944, max=39500, avg=11326.80, stdev=3226.14 00:32:42.851 lat (usec): min=3955, max=39506, avg=11409.74, stdev=3265.53 00:32:42.851 clat percentiles (usec): 00:32:42.851 | 1.00th=[ 6128], 5.00th=[ 7635], 10.00th=[ 8848], 20.00th=[ 9241], 00:32:42.851 | 30.00th=[ 9634], 40.00th=[10028], 50.00th=[10290], 60.00th=[10945], 00:32:42.851 | 70.00th=[11600], 80.00th=[13304], 90.00th=[15664], 95.00th=[17695], 00:32:42.851 | 99.00th=[23462], 99.50th=[23462], 99.90th=[29492], 99.95th=[29492], 00:32:42.851 | 99.99th=[39584] 00:32:42.851 write: IOPS=5925, BW=23.1MiB/s (24.3MB/s)(23.3MiB/1006msec); 0 zone resets 00:32:42.851 slat (usec): min=2, max=22275, avg=79.54, stdev=684.77 00:32:42.851 clat (usec): min=701, max=39374, avg=10676.20, stdev=4746.38 00:32:42.851 lat (usec): min=711, max=39537, avg=10755.73, stdev=4786.86 00:32:42.851 clat percentiles (usec): 00:32:42.851 | 1.00th=[ 3261], 5.00th=[ 5604], 10.00th=[ 6194], 20.00th=[ 7177], 00:32:42.851 | 30.00th=[ 8848], 40.00th=[ 9503], 50.00th=[10028], 60.00th=[10290], 00:32:42.851 | 70.00th=[10945], 80.00th=[12649], 
90.00th=[15533], 95.00th=[23462], 00:32:42.851 | 99.00th=[26346], 99.50th=[29754], 99.90th=[34866], 99.95th=[34866], 00:32:42.851 | 99.99th=[39584] 00:32:42.851 bw ( KiB/s): min=21408, max=25256, per=33.01%, avg=23332.00, stdev=2720.95, samples=2 00:32:42.851 iops : min= 5352, max= 6314, avg=5833.00, stdev=680.24, samples=2 00:32:42.851 lat (usec) : 750=0.03% 00:32:42.851 lat (msec) : 2=0.01%, 4=1.17%, 10=43.71%, 20=50.53%, 50=4.55% 00:32:42.851 cpu : usr=4.58%, sys=7.16%, ctx=377, majf=0, minf=1 00:32:42.851 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:32:42.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.851 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:42.851 issued rwts: total=5632,5961,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.851 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:42.851 job2: (groupid=0, jobs=1): err= 0: pid=1920663: Tue Nov 19 10:59:49 2024 00:32:42.851 read: IOPS=3891, BW=15.2MiB/s (15.9MB/s)(15.3MiB/1006msec) 00:32:42.851 slat (nsec): min=1387, max=10935k, avg=122151.23, stdev=677853.61 00:32:42.851 clat (usec): min=1266, max=31279, avg=15757.05, stdev=5136.94 00:32:42.851 lat (usec): min=6153, max=31300, avg=15879.20, stdev=5185.68 00:32:42.851 clat percentiles (usec): 00:32:42.851 | 1.00th=[ 8356], 5.00th=[10159], 10.00th=[10552], 20.00th=[11338], 00:32:42.851 | 30.00th=[11863], 40.00th=[12387], 50.00th=[13698], 60.00th=[16712], 00:32:42.851 | 70.00th=[19006], 80.00th=[20841], 90.00th=[22938], 95.00th=[25560], 00:32:42.851 | 99.00th=[28967], 99.50th=[28967], 99.90th=[29754], 99.95th=[30802], 00:32:42.851 | 99.99th=[31327] 00:32:42.851 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:32:42.851 slat (usec): min=2, max=10163, avg=118.55, stdev=563.00 00:32:42.851 clat (usec): min=8128, max=39916, avg=16065.00, stdev=6871.80 00:32:42.851 lat (usec): min=8135, max=39924, avg=16183.55, stdev=6921.13 00:32:42.851 clat percentiles (usec): 00:32:42.851 | 1.00th=[ 8586], 5.00th=[ 9503], 10.00th=[10421], 20.00th=[11076], 00:32:42.851 | 30.00th=[11338], 40.00th=[11863], 50.00th=[12256], 60.00th=[15926], 00:32:42.851 | 70.00th=[17433], 80.00th=[21365], 90.00th=[26346], 95.00th=[31851], 00:32:42.851 | 99.00th=[36439], 99.50th=[38011], 99.90th=[40109], 99.95th=[40109], 00:32:42.851 | 99.99th=[40109] 00:32:42.851 bw ( KiB/s): min=16352, max=16416, per=23.18%, avg=16384.00, stdev=45.25, samples=2 00:32:42.851 iops : min= 4088, max= 4104, avg=4096.00, stdev=11.31, samples=2 00:32:42.851 lat (msec) : 2=0.01%, 10=5.64%, 20=71.29%, 50=23.06% 00:32:42.851 cpu : usr=2.29%, sys=7.06%, ctx=413, majf=0, minf=1 00:32:42.851 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:32:42.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.851 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:42.851 issued rwts: total=3915,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.851 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:42.851 job3: (groupid=0, jobs=1): err= 0: pid=1920673: Tue Nov 19 10:59:49 2024 00:32:42.851 read: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec) 00:32:42.851 slat (nsec): min=1059, max=13287k, avg=106594.66, stdev=842815.82 00:32:42.851 clat (usec): min=3471, max=31500, avg=14020.11, stdev=4540.54 00:32:42.851 lat (usec): min=3477, max=31504, avg=14126.70, stdev=4603.26 00:32:42.851 clat percentiles (usec): 00:32:42.851 | 
1.00th=[ 6456], 5.00th=[ 8029], 10.00th=[ 8717], 20.00th=[10814], 00:32:42.851 | 30.00th=[11600], 40.00th=[12125], 50.00th=[12518], 60.00th=[14222], 00:32:42.851 | 70.00th=[15139], 80.00th=[16319], 90.00th=[20579], 95.00th=[23725], 00:32:42.851 | 99.00th=[28181], 99.50th=[28705], 99.90th=[30016], 99.95th=[31589], 00:32:42.851 | 99.99th=[31589] 00:32:42.851 write: IOPS=4386, BW=17.1MiB/s (18.0MB/s)(17.3MiB/1009msec); 0 zone resets 00:32:42.851 slat (nsec): min=1817, max=13024k, avg=114817.45, stdev=708113.32 00:32:42.851 clat (usec): min=2689, max=54942, avg=15940.76, stdev=9404.17 00:32:42.851 lat (usec): min=2697, max=54947, avg=16055.58, stdev=9466.41 00:32:42.851 clat percentiles (usec): 00:32:42.851 | 1.00th=[ 4490], 5.00th=[ 6980], 10.00th=[ 8291], 20.00th=[ 9634], 00:32:42.851 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11731], 60.00th=[14222], 00:32:42.851 | 70.00th=[17171], 80.00th=[21365], 90.00th=[29492], 95.00th=[36439], 00:32:42.851 | 99.00th=[48497], 99.50th=[51119], 99.90th=[54789], 99.95th=[54789], 00:32:42.851 | 99.99th=[54789] 00:32:42.851 bw ( KiB/s): min=17008, max=17384, per=24.33%, avg=17196.00, stdev=265.87, samples=2 00:32:42.851 iops : min= 4252, max= 4346, avg=4299.00, stdev=66.47, samples=2 00:32:42.851 lat (msec) : 4=0.42%, 10=17.27%, 20=64.09%, 50=17.82%, 100=0.39% 00:32:42.851 cpu : usr=2.88%, sys=4.66%, ctx=381, majf=0, minf=2 00:32:42.851 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:32:42.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.851 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:42.851 issued rwts: total=4096,4426,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.851 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:42.852 00:32:42.852 Run status group 0 (all jobs): 00:32:42.852 READ: bw=64.7MiB/s (67.9MB/s), 11.9MiB/s-21.9MiB/s (12.5MB/s-22.9MB/s), io=65.3MiB (68.5MB), run=1006-1009msec 00:32:42.852 WRITE: bw=69.0MiB/s (72.4MB/s), 13.0MiB/s-23.1MiB/s (13.6MB/s-24.3MB/s), io=69.6MiB (73.0MB), run=1006-1009msec 00:32:42.852 00:32:42.852 Disk stats (read/write): 00:32:42.852 nvme0n1: ios=2555/2647, merge=0/0, ticks=18384/27366, in_queue=45750, util=97.39% 00:32:42.852 nvme0n2: ios=4502/4608, merge=0/0, ticks=46050/45037, in_queue=91087, util=97.84% 00:32:42.852 nvme0n3: ios=3095/3519, merge=0/0, ticks=15848/19315, in_queue=35163, util=97.51% 00:32:42.852 nvme0n4: ios=3072/3411, merge=0/0, ticks=40668/50775, in_queue=91443, util=89.07% 00:32:42.852 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:32:42.852 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1920768 00:32:42.852 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:32:42.852 10:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:32:42.852 [global] 00:32:42.852 thread=1 00:32:42.852 invalidate=1 00:32:42.852 rw=read 00:32:42.852 time_based=1 00:32:42.852 runtime=10 00:32:42.852 ioengine=libaio 00:32:42.852 direct=1 00:32:42.852 bs=4096 00:32:42.852 iodepth=1 00:32:42.852 norandommap=1 00:32:42.852 numjobs=1 00:32:42.852 00:32:42.852 [job0] 00:32:42.852 filename=/dev/nvme0n1 00:32:42.852 [job1] 00:32:42.852 filename=/dev/nvme0n2 00:32:42.852 [job2] 00:32:42.852 filename=/dev/nvme0n3 00:32:42.852 
[job3] 00:32:42.852 filename=/dev/nvme0n4 00:32:42.852 Could not set queue depth (nvme0n1) 00:32:42.852 Could not set queue depth (nvme0n2) 00:32:42.852 Could not set queue depth (nvme0n3) 00:32:42.852 Could not set queue depth (nvme0n4) 00:32:42.852 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:42.852 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:42.852 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:42.852 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:42.852 fio-3.35 00:32:42.852 Starting 4 threads 00:32:46.139 10:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:32:46.139 10:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:32:46.139 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=274432, buflen=4096 00:32:46.139 fio: pid=1921100, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:46.139 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=294912, buflen=4096 00:32:46.139 fio: pid=1921095, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:46.139 10:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:46.139 10:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:32:46.139 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=50913280, buflen=4096 00:32:46.139 fio: pid=1921069, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:46.139 10:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:46.139 10:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:32:46.398 10:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:46.398 10:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:32:46.398 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=1638400, buflen=4096 00:32:46.398 fio: pid=1921080, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:46.657 00:32:46.657 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1921069: Tue Nov 19 10:59:53 2024 00:32:46.657 read: IOPS=3940, BW=15.4MiB/s (16.1MB/s)(48.6MiB/3155msec) 00:32:46.657 slat (usec): min=6, max=29103, avg=11.97, stdev=296.29 00:32:46.657 clat (usec): min=175, max=21022, avg=237.40, stdev=187.97 00:32:46.658 lat (usec): min=191, max=29411, avg=249.37, stdev=351.71 
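The io_u errors above are the point of this stage: while the 10-second read jobs run at iodepth=1, the harness hot-removes the backing bdevs over the RPC socket, so in-flight reads start failing with 'Operation not supported', and the script later confirms 'nvmf hotplug test: fio failed as expected'. Condensed from the surrounding trace, the removal sequence looks like this (a sketch assuming the default RPC socket; in the log these calls are interleaved with the fio error lines):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC bdev_raid_delete concat0        # drop the concat bdev first
  $RPC bdev_raid_delete raid0          # then the raid0 bdev
  for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    $RPC bdev_malloc_delete "$m"       # then each malloc base bdev
  done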
00:32:46.658 clat percentiles (usec): 00:32:46.658 | 1.00th=[ 192], 5.00th=[ 196], 10.00th=[ 198], 20.00th=[ 210], 00:32:46.658 | 30.00th=[ 227], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 247], 00:32:46.658 | 70.00th=[ 249], 80.00th=[ 253], 90.00th=[ 258], 95.00th=[ 260], 00:32:46.658 | 99.00th=[ 273], 99.50th=[ 293], 99.90th=[ 437], 99.95th=[ 465], 00:32:46.658 | 99.99th=[ 611] 00:32:46.658 bw ( KiB/s): min=14535, max=18560, per=100.00%, avg=15990.50, stdev=1361.93, samples=6 00:32:46.658 iops : min= 3633, max= 4640, avg=3997.50, stdev=340.64, samples=6 00:32:46.658 lat (usec) : 250=72.75%, 500=27.20%, 750=0.03% 00:32:46.658 lat (msec) : 50=0.01% 00:32:46.658 cpu : usr=1.78%, sys=6.82%, ctx=12433, majf=0, minf=1 00:32:46.658 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:46.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:46.658 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:46.658 issued rwts: total=12431,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:46.658 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:46.658 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1921080: Tue Nov 19 10:59:53 2024 00:32:46.658 read: IOPS=118, BW=473KiB/s (484kB/s)(1600KiB/3382msec) 00:32:46.658 slat (nsec): min=6669, max=66478, avg=11334.50, stdev=8300.44 00:32:46.658 clat (usec): min=187, max=41998, avg=8387.68, stdev=16369.86 00:32:46.658 lat (usec): min=195, max=42024, avg=8398.97, stdev=16376.96 00:32:46.658 clat percentiles (usec): 00:32:46.658 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 204], 20.00th=[ 208], 00:32:46.658 | 30.00th=[ 210], 40.00th=[ 212], 50.00th=[ 215], 60.00th=[ 219], 00:32:46.658 | 70.00th=[ 223], 80.00th=[ 375], 90.00th=[41157], 95.00th=[41157], 00:32:46.658 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:46.658 | 99.99th=[42206] 00:32:46.658 bw ( KiB/s): min= 93, max= 2016, per=3.39%, avg=520.83, stdev=771.17, samples=6 00:32:46.658 iops : min= 23, max= 504, avg=130.17, stdev=192.82, samples=6 00:32:46.658 lat (usec) : 250=79.55%, 500=0.25% 00:32:46.658 lat (msec) : 50=19.95% 00:32:46.658 cpu : usr=0.06%, sys=0.18%, ctx=404, majf=0, minf=2 00:32:46.658 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:46.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:46.658 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:46.658 issued rwts: total=401,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:46.658 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:46.658 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1921095: Tue Nov 19 10:59:53 2024 00:32:46.658 read: IOPS=24, BW=98.1KiB/s (100kB/s)(288KiB/2937msec) 00:32:46.658 slat (nsec): min=9958, max=74777, avg=17522.77, stdev=9095.01 00:32:46.658 clat (usec): min=413, max=42065, avg=40471.13, stdev=4793.56 00:32:46.658 lat (usec): min=445, max=42078, avg=40488.75, stdev=4791.82 00:32:46.658 clat percentiles (usec): 00:32:46.658 | 1.00th=[ 412], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:32:46.658 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:46.658 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:32:46.658 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:46.658 | 99.99th=[42206] 00:32:46.658 bw ( KiB/s): min= 96, 
max= 104, per=0.63%, avg=97.60, stdev= 3.58, samples=5 00:32:46.658 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:32:46.658 lat (usec) : 500=1.37% 00:32:46.658 lat (msec) : 50=97.26% 00:32:46.658 cpu : usr=0.07%, sys=0.00%, ctx=74, majf=0, minf=2 00:32:46.658 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:46.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:46.658 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:46.658 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:46.658 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:46.658 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1921100: Tue Nov 19 10:59:53 2024 00:32:46.658 read: IOPS=24, BW=98.1KiB/s (100kB/s)(268KiB/2731msec) 00:32:46.658 slat (nsec): min=9471, max=35240, avg=16988.28, stdev=4931.65 00:32:46.658 clat (usec): min=501, max=43935, avg=40418.39, stdev=4963.96 00:32:46.658 lat (usec): min=537, max=43962, avg=40435.24, stdev=4961.77 00:32:46.658 clat percentiles (usec): 00:32:46.658 | 1.00th=[ 502], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:32:46.658 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:46.658 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:46.658 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:32:46.658 | 99.99th=[43779] 00:32:46.658 bw ( KiB/s): min= 96, max= 104, per=0.65%, avg=99.20, stdev= 4.38, samples=5 00:32:46.658 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:32:46.658 lat (usec) : 750=1.47% 00:32:46.658 lat (msec) : 50=97.06% 00:32:46.658 cpu : usr=0.11%, sys=0.00%, ctx=68, majf=0, minf=2 00:32:46.658 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:46.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:46.658 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:46.658 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:46.658 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:46.658 00:32:46.658 Run status group 0 (all jobs): 00:32:46.658 READ: bw=15.0MiB/s (15.7MB/s), 98.1KiB/s-15.4MiB/s (100kB/s-16.1MB/s), io=50.7MiB (53.1MB), run=2731-3382msec 00:32:46.658 00:32:46.658 Disk stats (read/write): 00:32:46.658 nvme0n1: ios=12360/0, merge=0/0, ticks=2807/0, in_queue=2807, util=94.39% 00:32:46.658 nvme0n2: ios=433/0, merge=0/0, ticks=4225/0, in_queue=4225, util=100.00% 00:32:46.658 nvme0n3: ios=70/0, merge=0/0, ticks=2833/0, in_queue=2833, util=96.52% 00:32:46.658 nvme0n4: ios=64/0, merge=0/0, ticks=2584/0, in_queue=2584, util=96.48% 00:32:46.658 10:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:46.658 10:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:32:46.917 10:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:46.917 10:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:32:47.176 10:59:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:47.176 10:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:32:47.176 10:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:47.176 10:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:32:47.435 10:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:32:47.435 10:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1920768 00:32:47.435 10:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:32:47.435 10:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:47.694 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:47.694 10:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:47.694 10:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:32:47.694 10:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:47.694 10:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:47.694 10:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:47.694 10:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:47.694 10:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:32:47.694 10:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:32:47.694 10:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:32:47.694 nvmf hotplug test: fio failed as expected 00:32:47.694 10:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:47.953 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:32:47.953 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:32:47.953 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:32:47.953 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:32:47.953 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:32:47.953 10:59:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:47.953 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:32:47.953 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:47.953 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:32:47.953 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:47.953 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:47.953 rmmod nvme_tcp 00:32:47.953 rmmod nvme_fabrics 00:32:47.953 rmmod nvme_keyring 00:32:47.953 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:47.953 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:32:47.953 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:32:47.953 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1918283 ']' 00:32:47.953 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1918283 00:32:47.953 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1918283 ']' 00:32:47.953 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1918283 00:32:47.953 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:32:47.953 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:47.953 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1918283 00:32:47.953 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:47.953 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:47.953 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1918283' 00:32:47.953 killing process with pid 1918283 00:32:47.954 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1918283 00:32:47.954 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1918283 00:32:48.213 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:48.213 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:48.213 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:48.213 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:32:48.213 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:32:48.213 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:32:48.213 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:32:48.213 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:48.213 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:48.213 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:48.213 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:48.213 10:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:50.117 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:50.117 00:32:50.117 real 0m25.930s 00:32:50.117 user 1m30.991s 00:32:50.117 sys 0m11.484s 00:32:50.117 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:50.117 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:50.117 ************************************ 00:32:50.117 END TEST nvmf_fio_target 00:32:50.117 ************************************ 00:32:50.376 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:32:50.376 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:50.376 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:50.376 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:50.376 ************************************ 00:32:50.376 START TEST nvmf_bdevio 00:32:50.376 ************************************ 00:32:50.376 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:32:50.376 * Looking for test storage... 
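That closes out nvmf_fio_target at roughly 26 seconds of wall time, and run_test chains straight into nvmf_bdevio with the same transport flags. To rerun just this stage outside the Jenkins pipeline, something like the following should suffice (assuming an SPDK checkout at the workspace path shown in the trace):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # the wrapper passes the same flags shown in the xtrace above
  ./test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode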
00:32:50.376 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:50.376 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:50.376 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:32:50.376 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:50.376 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:50.376 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:50.376 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:50.376 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:50.376 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:32:50.376 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:32:50.376 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:32:50.376 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:32:50.376 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:32:50.376 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:32:50.376 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:32:50.377 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:50.377 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:32:50.377 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:32:50.377 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:50.377 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:50.377 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:32:50.377 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:32:50.377 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:50.377 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:32:50.377 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:32:50.377 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:32:50.377 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:32:50.377 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:50.377 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:32:50.377 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:32:50.377 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:50.377 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:50.377 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:32:50.377 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:50.377 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:50.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.377 --rc genhtml_branch_coverage=1 00:32:50.377 --rc genhtml_function_coverage=1 00:32:50.377 --rc genhtml_legend=1 00:32:50.377 --rc geninfo_all_blocks=1 00:32:50.377 --rc geninfo_unexecuted_blocks=1 00:32:50.377 00:32:50.377 ' 00:32:50.377 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:50.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.377 --rc genhtml_branch_coverage=1 00:32:50.377 --rc genhtml_function_coverage=1 00:32:50.377 --rc genhtml_legend=1 00:32:50.377 --rc geninfo_all_blocks=1 00:32:50.377 --rc geninfo_unexecuted_blocks=1 00:32:50.377 00:32:50.377 ' 00:32:50.377 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:50.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.377 --rc genhtml_branch_coverage=1 00:32:50.377 --rc genhtml_function_coverage=1 00:32:50.377 --rc genhtml_legend=1 00:32:50.377 --rc geninfo_all_blocks=1 00:32:50.377 --rc geninfo_unexecuted_blocks=1 00:32:50.377 00:32:50.377 ' 00:32:50.377 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:50.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.377 --rc genhtml_branch_coverage=1 00:32:50.377 --rc genhtml_function_coverage=1 00:32:50.377 --rc genhtml_legend=1 00:32:50.377 --rc geninfo_all_blocks=1 00:32:50.377 --rc geninfo_unexecuted_blocks=1 00:32:50.377 00:32:50.377 ' 00:32:50.377 10:59:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:50.377 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:32:50.377 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:50.377 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:50.377 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:50.377 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:50.377 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:50.377 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:50.377 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:50.377 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:50.377 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:50.377 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:50.637 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:50.637 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:50.637 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:50.637 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:50.637 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:50.637 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:50.637 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:50.637 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:32:50.637 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:50.637 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:50.637 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:50.637 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.637 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.637 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.637 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:32:50.637 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.637 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:32:50.637 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:50.637 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:50.637 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:50.637 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:50.637 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:50.637 10:59:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:50.637 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:50.637 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:50.637 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:50.637 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:50.637 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:50.637 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:50.637 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:32:50.637 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:50.637 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:50.637 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:50.637 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:50.637 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:50.637 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:50.637 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:50.637 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:50.637 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:50.637 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:50.637 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:32:50.637 10:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:57.208 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:57.208 11:00:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:57.208 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:57.208 Found net devices under 0000:86:00.0: cvl_0_0 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:57.208 Found net devices under 0000:86:00.1: cvl_0_1 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:57.208 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:57.209 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:57.209 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:57.209 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:57.209 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:57.209 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:57.209 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:57.209 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:57.209 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:57.209 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:57.209 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:57.209 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:57.209 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.342 ms 00:32:57.209 00:32:57.209 --- 10.0.0.2 ping statistics --- 00:32:57.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:57.209 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:32:57.209 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:57.209 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:57.209 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:32:57.209 00:32:57.209 --- 10.0.0.1 ping statistics --- 00:32:57.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:57.209 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:32:57.209 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:57.209 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:32:57.209 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:57.209 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:57.209 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:57.209 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:57.209 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:57.209 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:57.209 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:57.209 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:32:57.209 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:57.209 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:57.209 11:00:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:57.209 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1925486 00:32:57.209 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:32:57.209 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1925486 00:32:57.209 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1925486 ']' 00:32:57.209 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:57.209 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:57.209 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:57.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:57.209 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:57.209 11:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:57.209 [2024-11-19 11:00:03.824883] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:57.209 [2024-11-19 11:00:03.825774] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:32:57.209 [2024-11-19 11:00:03.825809] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:57.209 [2024-11-19 11:00:03.904287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:57.209 [2024-11-19 11:00:03.947325] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:57.209 [2024-11-19 11:00:03.947365] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:57.209 [2024-11-19 11:00:03.947372] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:57.209 [2024-11-19 11:00:03.947379] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:57.209 [2024-11-19 11:00:03.947384] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:57.209 [2024-11-19 11:00:03.948835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:57.209 [2024-11-19 11:00:03.948942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:57.209 [2024-11-19 11:00:03.949049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:57.209 [2024-11-19 11:00:03.949050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:57.209 [2024-11-19 11:00:04.015442] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
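The trace above is nvmfappstart launching nvmf_tgt inside the test namespace with --interrupt-mode and waitforlisten blocking until the RPC socket answers. A minimal standalone sketch of that launch-and-wait step, run from the SPDK repo root (the rpc_get_methods probe and the 0.5 s poll interval are assumptions, not lifted from waitforlisten):

# Sketch: start the target in the namespace, then poll its RPC socket until it answers.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    # Bail out if the target died during startup instead of spinning forever.
    kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt died during startup' >&2; exit 1; }
    sleep 0.5
done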
00:32:57.209 [2024-11-19 11:00:04.016575] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:57.209 [2024-11-19 11:00:04.016585] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:57.209 [2024-11-19 11:00:04.016977] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:57.209 [2024-11-19 11:00:04.017018] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:57.209 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:57.209 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:32:57.209 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:57.209 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:57.209 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:57.209 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:57.209 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:57.209 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.209 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:57.209 [2024-11-19 11:00:04.081726] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:57.209 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.209 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:57.209 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.209 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:57.209 Malloc0 00:32:57.209 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.209 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:57.209 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.209 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:57.209 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.209 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:57.209 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.209 11:00:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:57.209 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.209 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:57.209 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.209 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:57.209 [2024-11-19 11:00:04.165970] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:57.209 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.209 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:32:57.209 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:32:57.209 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:32:57.209 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:32:57.209 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:57.209 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:57.209 { 00:32:57.209 "params": { 00:32:57.209 "name": "Nvme$subsystem", 00:32:57.209 "trtype": "$TEST_TRANSPORT", 00:32:57.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:57.209 "adrfam": "ipv4", 00:32:57.209 "trsvcid": "$NVMF_PORT", 00:32:57.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:57.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:57.210 "hdgst": ${hdgst:-false}, 00:32:57.210 "ddgst": ${ddgst:-false} 00:32:57.210 }, 00:32:57.210 "method": "bdev_nvme_attach_controller" 00:32:57.210 } 00:32:57.210 EOF 00:32:57.210 )") 00:32:57.210 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:32:57.210 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:32:57.210 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:32:57.210 11:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:57.210 "params": { 00:32:57.210 "name": "Nvme1", 00:32:57.210 "trtype": "tcp", 00:32:57.210 "traddr": "10.0.0.2", 00:32:57.210 "adrfam": "ipv4", 00:32:57.210 "trsvcid": "4420", 00:32:57.210 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:57.210 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:57.210 "hdgst": false, 00:32:57.210 "ddgst": false 00:32:57.210 }, 00:32:57.210 "method": "bdev_nvme_attach_controller" 00:32:57.210 }' 00:32:57.210 [2024-11-19 11:00:04.214580] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
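Collapsed, the rpc_cmd calls traced above amount to the following provisioning sequence (rpc_cmd is a thin wrapper around scripts/rpc.py talking to the target's /var/tmp/spdk.sock; spelling the socket path out here is only for illustration). After this, gen_nvmf_target_json emits the bdev_nvme_attach_controller config shown above, which bdevio consumes via --json /dev/fd/62:

# Sketch of the target-side setup performed by bdevio.sh@18-22 above.
rpc='./scripts/rpc.py -s /var/tmp/spdk.sock'
$rpc nvmf_create_transport -t tcp -o -u 8192                         # transport options exactly as traced
$rpc bdev_malloc_create 64 512 -b Malloc0                            # 64 MiB RAM-backed bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0        # expose Malloc0 as namespace 1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420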
00:32:57.210 [2024-11-19 11:00:04.214624] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1925513 ]
00:32:57.210 [2024-11-19 11:00:04.288674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:32:57.210 [2024-11-19 11:00:04.332681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:32:57.210 [2024-11-19 11:00:04.332809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:32:57.210 [2024-11-19 11:00:04.332810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:32:57.210 I/O targets:
00:32:57.210 Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:32:57.210
00:32:57.210
00:32:57.210 CUnit - A unit testing framework for C - Version 2.1-3
00:32:57.210 http://cunit.sourceforge.net/
00:32:57.210
00:32:57.210
00:32:57.210 Suite: bdevio tests on: Nvme1n1
00:32:57.469 Test: blockdev write read block ...passed
00:32:57.469 Test: blockdev write zeroes read block ...passed
00:32:57.469 Test: blockdev write zeroes read no split ...passed
00:32:57.469 Test: blockdev write zeroes read split ...passed
00:32:57.469 Test: blockdev write zeroes read split partial ...passed
00:32:57.469 Test: blockdev reset ...[2024-11-19 11:00:04.755573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:32:57.469 [2024-11-19 11:00:04.755638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbbb340 (9): Bad file descriptor
00:32:57.469 [2024-11-19 11:00:04.800905] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful.
00:32:57.469 passed
00:32:57.469 Test: blockdev write read 8 blocks ...passed
00:32:57.469 Test: blockdev write read size > 128k ...passed
00:32:57.469 Test: blockdev write read invalid size ...passed
00:32:57.469 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:32:57.469 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:32:57.469 Test: blockdev write read max offset ...passed
00:32:57.728 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:32:57.728 Test: blockdev writev readv 8 blocks ...passed
00:32:57.728 Test: blockdev writev readv 30 x 1block ...passed
00:32:57.728 Test: blockdev writev readv block ...passed
00:32:57.728 Test: blockdev writev readv size > 128k ...passed
00:32:57.728 Test: blockdev writev readv size > 128k in two iovs ...passed
00:32:57.728 Test: blockdev comparev and writev ...[2024-11-19 11:00:04.969787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:32:57.728 [2024-11-19 11:00:04.969816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:57.728 [2024-11-19 11:00:04.969830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:32:57.728 [2024-11-19 11:00:04.969838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:57.728 [2024-11-19 11:00:04.970131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:32:57.728 [2024-11-19 11:00:04.970143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:32:57.728 [2024-11-19 11:00:04.970155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:32:57.728 [2024-11-19 11:00:04.970162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:32:57.728 [2024-11-19 11:00:04.970454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:32:57.728 [2024-11-19 11:00:04.970465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:32:57.728 [2024-11-19 11:00:04.970477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:32:57.728 [2024-11-19 11:00:04.970484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:32:57.728 [2024-11-19 11:00:04.970764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:32:57.728 [2024-11-19 11:00:04.970775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:32:57.728 [2024-11-19 11:00:04.970787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:32:57.728 [2024-11-19 11:00:04.970794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:32:57.728 passed
00:32:57.728 Test: blockdev nvme passthru rw ...passed
00:32:57.729 Test: blockdev nvme passthru vendor specific ...[2024-11-19 11:00:05.053322] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:32:57.729 [2024-11-19 11:00:05.053338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:32:57.729 [2024-11-19 11:00:05.053451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:32:57.729 [2024-11-19 11:00:05.053460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:32:57.729 [2024-11-19 11:00:05.053565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:32:57.729 [2024-11-19 11:00:05.053574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:32:57.729 [2024-11-19 11:00:05.053677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:32:57.729 [2024-11-19 11:00:05.053687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:32:57.729 passed
00:32:57.729 Test: blockdev nvme admin passthru ...passed
00:32:57.729 Test: blockdev copy ...passed
00:32:57.729
00:32:57.729 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:32:57.729               suites      1      1    n/a      0        0
00:32:57.729                tests     23     23     23      0        0
00:32:57.729              asserts    152    152    152      0      n/a
00:32:57.729
00:32:57.729 Elapsed time = 0.928 seconds
00:32:57.988 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:32:57.988 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:57.988 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:32:57.988 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:57.988 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:32:57.988 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini
00:32:57.988 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup
00:32:57.988 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync
00:32:57.988 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:32:57.988 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e
00:32:57.988 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:57.988 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:32:57.988 rmmod nvme_tcp
00:32:57.988 rmmod nvme_fabrics
00:32:57.988 rmmod nvme_keyring
00:32:57.988 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
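With the suite at 23/23 passed, nvmftestfini unwinds the fixture; the cleanup traced here and in the lines that follow reduces to roughly this sketch (the explicit netns deletion stands in for remove_spdk_ns and is an assumption based on the namespace created earlier):

# Sketch of the teardown: stop the target, drop initiator modules, restore iptables.
kill "$nvmfpid" 2>/dev/null && wait "$nvmfpid" 2>/dev/null       # killprocess
modprobe -v -r nvme-tcp nvme-fabrics                             # unload kernel initiator modules
iptables-save | grep -v SPDK_NVMF | iptables-restore             # iptr: strip only the SPDK_NVMF-tagged rule
ip netns del cvl_0_0_ns_spdk 2>/dev/null                         # assumed body of remove_spdk_ns
ip -4 addr flush cvl_0_1                                         # matches nvmf/common.sh@303 below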
00:32:57.988 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:32:57.988 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:32:57.988 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1925486 ']' 00:32:57.988 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1925486 00:32:57.988 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1925486 ']' 00:32:57.988 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1925486 00:32:57.988 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:32:57.988 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:57.988 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1925486 00:32:57.988 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:32:57.988 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:32:57.988 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1925486' 00:32:57.988 killing process with pid 1925486 00:32:57.988 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1925486 00:32:57.988 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1925486 00:32:58.248 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:58.248 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:58.248 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:58.248 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:32:58.248 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:32:58.248 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:58.248 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:32:58.248 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:58.248 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:58.248 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:58.248 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:58.248 11:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:00.786 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:00.786 00:33:00.786 real 0m10.014s 00:33:00.786 user 
0m8.907s 00:33:00.786 sys 0m5.315s 00:33:00.787 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:00.787 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:00.787 ************************************ 00:33:00.787 END TEST nvmf_bdevio 00:33:00.787 ************************************ 00:33:00.787 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:33:00.787 00:33:00.787 real 4m31.304s 00:33:00.787 user 9m2.723s 00:33:00.787 sys 1m51.536s 00:33:00.787 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:00.787 11:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:00.787 ************************************ 00:33:00.787 END TEST nvmf_target_core_interrupt_mode 00:33:00.787 ************************************ 00:33:00.787 11:00:07 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:33:00.787 11:00:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:00.787 11:00:07 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:00.787 11:00:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:00.787 ************************************ 00:33:00.787 START TEST nvmf_interrupt 00:33:00.787 ************************************ 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:33:00.787 * Looking for test storage... 
00:33:00.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:00.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:00.787 --rc genhtml_branch_coverage=1 00:33:00.787 --rc genhtml_function_coverage=1 00:33:00.787 --rc genhtml_legend=1 00:33:00.787 --rc geninfo_all_blocks=1 00:33:00.787 --rc geninfo_unexecuted_blocks=1 00:33:00.787 00:33:00.787 ' 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:00.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:00.787 --rc genhtml_branch_coverage=1 00:33:00.787 --rc genhtml_function_coverage=1 00:33:00.787 --rc genhtml_legend=1 00:33:00.787 --rc geninfo_all_blocks=1 00:33:00.787 --rc geninfo_unexecuted_blocks=1 00:33:00.787 00:33:00.787 ' 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:00.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:00.787 --rc genhtml_branch_coverage=1 00:33:00.787 --rc genhtml_function_coverage=1 00:33:00.787 --rc genhtml_legend=1 00:33:00.787 --rc geninfo_all_blocks=1 00:33:00.787 --rc geninfo_unexecuted_blocks=1 00:33:00.787 00:33:00.787 ' 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:00.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:00.787 --rc genhtml_branch_coverage=1 00:33:00.787 --rc genhtml_function_coverage=1 00:33:00.787 --rc genhtml_legend=1 00:33:00.787 --rc geninfo_all_blocks=1 00:33:00.787 --rc geninfo_unexecuted_blocks=1 00:33:00.787 00:33:00.787 ' 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:00.787 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:00.788 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:00.788 11:00:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:33:00.788 11:00:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:00.788 11:00:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:00.788 11:00:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:00.788 11:00:07 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.788 11:00:07 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.788 11:00:07 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.788 11:00:07 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:33:00.788 11:00:07 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.788 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:33:00.788 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:00.788 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:00.788 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:00.788 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:00.788 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:00.788 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:00.788 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:00.788 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:00.788 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:00.788 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:00.788 11:00:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:33:00.788 11:00:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:00.788 11:00:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:33:00.788 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:00.788 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:00.788 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:00.788 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:00.788 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:00.788 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:00.788 11:00:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:00.788 11:00:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:00.788 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:00.788 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:00.788 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:33:00.788 11:00:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:07.361 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:07.361 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:33:07.361 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:07.361 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:07.361 11:00:13 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:07.361 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:07.361 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:07.361 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:33:07.361 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:07.361 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:33:07.361 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:33:07.361 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:33:07.361 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:33:07.361 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:33:07.361 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:33:07.361 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:07.361 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:07.361 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:07.361 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:07.361 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:07.361 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:07.361 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:07.361 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:07.361 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:07.361 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:07.361 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:07.361 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:07.361 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:07.361 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:07.361 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:07.361 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:07.361 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:07.361 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:07.361 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:07.361 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:07.361 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:07.361 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:07.361 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:07.361 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:07.361 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:07.361 11:00:13 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:07.361 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:07.361 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:07.361 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:07.361 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:07.362 Found net devices under 0000:86:00.0: cvl_0_0 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:07.362 Found net devices under 0000:86:00.1: cvl_0_1 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:07.362 11:00:13 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:07.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:07.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.463 ms 00:33:07.362 00:33:07.362 --- 10.0.0.2 ping statistics --- 00:33:07.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:07.362 rtt min/avg/max/mdev = 0.463/0.463/0.463/0.000 ms 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:07.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:07.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:33:07.362 00:33:07.362 --- 10.0.0.1 ping statistics --- 00:33:07.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:07.362 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=1929655 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 1929655 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 1929655 ']' 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:07.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:07.362 11:00:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:07.362 [2024-11-19 11:00:13.872465] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:07.362 [2024-11-19 11:00:13.873487] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:33:07.362 [2024-11-19 11:00:13.873527] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:07.362 [2024-11-19 11:00:13.953192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:07.362 [2024-11-19 11:00:13.994407] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:33:07.362 [2024-11-19 11:00:13.994442] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:07.362 [2024-11-19 11:00:13.994450] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:07.362 [2024-11-19 11:00:13.994456] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:07.362 [2024-11-19 11:00:13.994461] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:07.362 [2024-11-19 11:00:13.995628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:07.362 [2024-11-19 11:00:13.995628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:07.362 [2024-11-19 11:00:14.062658] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:07.362 [2024-11-19 11:00:14.063246] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:07.362 [2024-11-19 11:00:14.063474] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:07.362 11:00:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:07.362 11:00:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:33:07.362 11:00:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:07.362 11:00:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:07.362 11:00:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:07.362 11:00:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:07.362 11:00:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:33:07.362 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:33:07.362 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:33:07.362 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:33:07.362 5000+0 records in 00:33:07.362 5000+0 records out 00:33:07.362 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0184157 s, 556 MB/s 00:33:07.362 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:33:07.362 11:00:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:07.363 AIO0 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:07.363 [2024-11-19 11:00:14.196411] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.363 11:00:14 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:07.363 [2024-11-19 11:00:14.236726] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1929655 0 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1929655 0 idle 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1929655 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1929655 -w 256 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1929655 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.25 reactor_0' 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1929655 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.25 reactor_0 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1929655 1 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1929655 1 idle 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1929655 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1929655 -w 256 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1929671 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.00 reactor_1' 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1929671 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.00 reactor_1 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1929708 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
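Note on the probes above and below: the target runs with -m 0x3 (reactors pinned to cores 0-1) while spdk_nvme_perf is launched with -c 0xC (cores 2-3), so any %CPU the reactor threads accumulate reflects NVMe-oF work, not the load generator itself. Each reactor_is_busy/reactor_is_idle call boils down to a single batch-mode top sample; a condensed sketch, assuming top's %CPU sits in column 9 as in the trace (the function name and thresholds here are illustrative, not the harness's exact code):

    reactor_cpu_ok() {
        local pid=$1 idx=$2 state=$3 threshold=$4 cpu
        # one batch-mode sample of the app's threads, widened so the
        # COMMAND column ("reactor_N") is not truncated
        cpu=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}" | awk '{print $9}')
        cpu=${cpu%.*}                  # 66.7 -> 66, as in the trace
        if [[ $state == busy ]]; then
            (( cpu >= threshold ))     # busy: rate must reach the threshold
        else
            (( cpu <= threshold ))     # idle: rate must stay at or below it
        fi
    }
    # e.g. reactor_cpu_ok 1929655 0 busy 30 && echo reactor_0 is busy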
00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1929655 0 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1929655 0 busy 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1929655 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1929655 -w 256 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1929655 root 20 0 128.2g 47616 34560 R 66.7 0.0 0:00.35 reactor_0' 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1929655 root 20 0 128.2g 47616 34560 R 66.7 0.0 0:00.35 reactor_0 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=66.7 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=66 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1929655 1 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1929655 1 busy 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1929655 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1929655 -w 256 00:33:07.363 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:07.622 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1929671 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.23 reactor_1' 00:33:07.622 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1929671 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.23 reactor_1 00:33:07.622 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:07.623 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:07.623 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:33:07.623 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:33:07.623 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:33:07.623 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:33:07.623 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:33:07.623 11:00:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:07.623 11:00:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1929708 00:33:17.608 Initializing NVMe Controllers 00:33:17.608 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:17.608 Controller IO queue size 256, less than required. 00:33:17.608 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:17.608 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:33:17.608 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:33:17.608 Initialization complete. Launching workers. 
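The "Controller IO queue size 256, less than required" notice above is expected with these settings: the transport was created with -q 256 and perf also asks for 256 outstanding I/Os per queue, but an NVMe queue of size N can hold at most N-1 commands in flight, so the overflow waits in the initiator's NVMe driver. That is harmless for this test; a hypothetical way to avoid it would be a deeper transport queue (or a lower perf -q). Sketch only, with 512 as an arbitrary example depth and the other flags exactly as used in the rpc_cmd call above:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -q 512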
00:33:17.608 ======================================================== 00:33:17.608 Latency(us) 00:33:17.608 Device Information : IOPS MiB/s Average min max 00:33:17.608 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 15971.20 62.39 16038.40 3028.81 57026.64 00:33:17.608 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16103.90 62.91 15901.53 7685.85 25053.75 00:33:17.608 ======================================================== 00:33:17.608 Total : 32075.09 125.29 15969.68 3028.81 57026.64 00:33:17.608 00:33:17.608 11:00:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:33:17.608 11:00:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1929655 0 00:33:17.608 11:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1929655 0 idle 00:33:17.608 11:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1929655 00:33:17.608 11:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:17.608 11:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:17.608 11:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:17.608 11:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:17.608 11:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:17.608 11:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:17.608 11:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:17.608 11:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:17.608 11:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:17.608 11:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1929655 -w 256 00:33:17.608 11:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:17.608 11:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1929655 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.25 reactor_0' 00:33:17.608 11:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1929655 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.25 reactor_0 00:33:17.608 11:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:17.608 11:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:17.608 11:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:17.608 11:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:17.608 11:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:17.608 11:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:17.608 11:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:17.608 11:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:17.608 11:00:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:33:17.608 11:00:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1929655 1 00:33:17.608 11:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1929655 1 idle 00:33:17.608 11:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1929655 00:33:17.608 11:00:25 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:33:17.608 11:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:17.608 11:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:17.608 11:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:17.608 11:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:17.608 11:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:17.608 11:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:17.608 11:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:17.608 11:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:17.608 11:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1929655 -w 256 00:33:17.608 11:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:17.867 11:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1929671 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.00 reactor_1' 00:33:17.867 11:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1929671 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.00 reactor_1 00:33:17.867 11:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:17.867 11:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:17.867 11:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:17.867 11:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:17.867 11:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:17.867 11:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:17.867 11:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:17.867 11:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:17.867 11:00:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:18.434 11:00:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:33:18.434 11:00:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:33:18.434 11:00:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:33:18.434 11:00:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:33:18.434 11:00:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:33:20.339 11:00:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:33:20.339 11:00:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:33:20.339 11:00:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:33:20.339 11:00:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:33:20.339 11:00:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:33:20.339 11:00:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:33:20.339 11:00:27 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:33:20.339 11:00:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1929655 0 00:33:20.339 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1929655 0 idle 00:33:20.339 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1929655 00:33:20.339 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:20.339 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:20.339 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:20.339 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:20.339 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:20.339 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:20.339 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:20.339 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:20.339 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:20.339 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1929655 -w 256 00:33:20.339 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:20.599 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1929655 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:20.51 reactor_0' 00:33:20.599 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1929655 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:20.51 reactor_0 00:33:20.599 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:20.599 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:20.599 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:20.599 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:20.599 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:20.599 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:20.599 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:20.599 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:20.599 11:00:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:33:20.599 11:00:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1929655 1 00:33:20.599 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1929655 1 idle 00:33:20.599 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1929655 00:33:20.599 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:20.599 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:20.599 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:20.599 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:20.599 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:20.599 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:20.599 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
00:33:20.599 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:20.599 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:20.599 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1929655 -w 256 00:33:20.599 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:20.599 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1929671 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:10.10 reactor_1' 00:33:20.599 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1929671 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:10.10 reactor_1 00:33:20.599 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:20.599 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:20.599 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:20.599 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:20.599 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:20.599 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:20.599 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:20.599 11:00:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:20.599 11:00:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:20.859 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:20.859 11:00:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:20.859 11:00:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:33:20.859 11:00:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:33:20.859 11:00:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:20.859 11:00:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:33:20.859 11:00:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:21.117 11:00:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:33:21.117 11:00:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:33:21.117 11:00:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:33:21.117 11:00:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:21.117 11:00:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:33:21.117 11:00:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:21.117 11:00:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:33:21.117 11:00:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:21.117 11:00:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:21.117 rmmod nvme_tcp 00:33:21.117 rmmod nvme_fabrics 00:33:21.117 rmmod nvme_keyring 00:33:21.117 11:00:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:21.117 11:00:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:33:21.117 11:00:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:33:21.117 11:00:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
1929655 ']' 00:33:21.117 11:00:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 1929655 00:33:21.117 11:00:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 1929655 ']' 00:33:21.117 11:00:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 1929655 00:33:21.117 11:00:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:33:21.117 11:00:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:21.117 11:00:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1929655 00:33:21.117 11:00:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:21.117 11:00:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:21.117 11:00:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1929655' 00:33:21.117 killing process with pid 1929655 00:33:21.117 11:00:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 1929655 00:33:21.117 11:00:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 1929655 00:33:21.375 11:00:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:21.375 11:00:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:21.375 11:00:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:21.375 11:00:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:33:21.375 11:00:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:33:21.375 11:00:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:21.375 11:00:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:33:21.375 11:00:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:21.375 11:00:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:21.375 11:00:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:21.375 11:00:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:21.375 11:00:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:23.285 11:00:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:23.285 00:33:23.285 real 0m22.971s 00:33:23.285 user 0m39.426s 00:33:23.285 sys 0m8.724s 00:33:23.285 11:00:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:23.285 11:00:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:23.285 ************************************ 00:33:23.285 END TEST nvmf_interrupt 00:33:23.285 ************************************ 00:33:23.544 00:33:23.544 real 27m24.894s 00:33:23.544 user 56m25.922s 00:33:23.544 sys 9m21.159s 00:33:23.544 11:00:30 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:23.544 11:00:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:23.544 ************************************ 00:33:23.544 END TEST nvmf_tcp 00:33:23.544 ************************************ 00:33:23.544 11:00:30 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:33:23.544 11:00:30 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:23.544 11:00:30 -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:23.544 11:00:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:23.544 11:00:30 -- common/autotest_common.sh@10 -- # set +x 00:33:23.544 ************************************ 00:33:23.544 START TEST spdkcli_nvmf_tcp 00:33:23.544 ************************************ 00:33:23.544 11:00:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:23.544 * Looking for test storage... 00:33:23.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:33:23.544 11:00:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:23.544 11:00:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:33:23.544 11:00:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:23.804 11:00:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:23.804 11:00:31 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:23.804 11:00:31 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:23.804 11:00:31 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:23.804 11:00:31 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:33:23.804 11:00:31 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:33:23.804 11:00:31 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:33:23.804 11:00:31 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:33:23.804 11:00:31 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:33:23.804 11:00:31 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:33:23.804 11:00:31 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:33:23.804 11:00:31 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:23.804 11:00:31 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:33:23.804 11:00:31 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:33:23.804 11:00:31 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:23.804 11:00:31 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:23.804 11:00:31 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:33:23.804 11:00:31 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:33:23.804 11:00:31 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:23.804 11:00:31 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:33:23.804 11:00:31 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:33:23.804 11:00:31 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:33:23.804 11:00:31 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:33:23.804 11:00:31 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:23.804 11:00:31 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:33:23.804 11:00:31 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:33:23.804 11:00:31 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:23.804 11:00:31 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:23.804 11:00:31 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:33:23.804 11:00:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:23.804 11:00:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:23.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:23.804 --rc genhtml_branch_coverage=1 00:33:23.804 --rc genhtml_function_coverage=1 00:33:23.804 --rc genhtml_legend=1 00:33:23.804 --rc geninfo_all_blocks=1 00:33:23.804 --rc geninfo_unexecuted_blocks=1 00:33:23.804 00:33:23.804 ' 00:33:23.804 11:00:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:23.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:23.804 --rc genhtml_branch_coverage=1 00:33:23.804 --rc genhtml_function_coverage=1 00:33:23.804 --rc genhtml_legend=1 00:33:23.804 --rc geninfo_all_blocks=1 00:33:23.804 --rc geninfo_unexecuted_blocks=1 00:33:23.804 00:33:23.804 ' 00:33:23.804 11:00:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:23.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:23.804 --rc genhtml_branch_coverage=1 00:33:23.804 --rc genhtml_function_coverage=1 00:33:23.804 --rc genhtml_legend=1 00:33:23.804 --rc geninfo_all_blocks=1 00:33:23.804 --rc geninfo_unexecuted_blocks=1 00:33:23.804 00:33:23.804 ' 00:33:23.804 11:00:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:23.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:23.805 --rc genhtml_branch_coverage=1 00:33:23.805 --rc genhtml_function_coverage=1 00:33:23.805 --rc genhtml_legend=1 00:33:23.805 --rc geninfo_all_blocks=1 00:33:23.805 --rc geninfo_unexecuted_blocks=1 00:33:23.805 00:33:23.805 ' 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:33:23.805 
11:00:31 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:33:23.805 11:00:31 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:23.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1932463 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1932463 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 1932463 ']' 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:23.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:23.805 11:00:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:23.805 [2024-11-19 11:00:31.106548] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
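waitforlisten above simply polls until the freshly started app answers on its RPC socket before the spdkcli commands are issued. A minimal hypothetical equivalent, using the SPDK defaults seen in this trace (/var/tmp/spdk.sock, repo-relative paths):

    ./build/bin/nvmf_tgt -m 0x3 -p 0 &
    tgt_pid=$!
    # poll the default RPC socket; rpc_get_methods is always available
    until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "nvmf_tgt (pid $tgt_pid) is up"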
00:33:23.805 [2024-11-19 11:00:31.106597] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1932463 ] 00:33:23.805 [2024-11-19 11:00:31.179362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:23.805 [2024-11-19 11:00:31.223296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:23.805 [2024-11-19 11:00:31.223298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:24.065 11:00:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:24.065 11:00:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:33:24.065 11:00:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:33:24.065 11:00:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:24.065 11:00:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:24.065 11:00:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:33:24.065 11:00:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:33:24.065 11:00:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:33:24.065 11:00:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:24.065 11:00:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:24.065 11:00:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:24.065 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:24.065 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:24.065 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:24.065 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:24.065 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:24.065 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:24.065 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:24.065 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:24.065 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:24.065 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:24.065 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:24.065 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:24.065 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:24.065 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:24.065 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:24.065 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:33:24.065 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:24.065 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:24.065 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:24.065 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:24.065 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:24.065 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:24.065 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:33:24.065 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:24.065 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:24.065 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:24.065 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:24.065 ' 00:33:26.600 [2024-11-19 11:00:34.045467] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:27.978 [2024-11-19 11:00:35.381934] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:33:30.513 [2024-11-19 11:00:37.849534] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:33:33.046 [2024-11-19 11:00:40.028385] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:33:34.422 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:34.422 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:34.422 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:34.422 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:34.422 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:34.422 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:34.422 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:34.422 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:34.422 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:34.423 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:34.423 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:34.423 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:34.423 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:34.423 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:34.423 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:34.423 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:34.423 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:34.423 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:34.423 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:34.423 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:34.423 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:34.423 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:34.423 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:34.423 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:33:34.423 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:34.423 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:34.423 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:34.423 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:34.423 11:00:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:33:34.423 11:00:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:34.423 11:00:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:34.423 11:00:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:34.423 11:00:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:34.423 11:00:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:34.423 11:00:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:33:34.423 11:00:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:33:34.990 11:00:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:34.990 11:00:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:34.990 11:00:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:34.991 11:00:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:34.991 11:00:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:34.991 
11:00:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:34.991 11:00:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:34.991 11:00:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:34.991 11:00:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:34.991 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:33:34.991 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:34.991 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:34.991 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:33:34.991 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:33:34.991 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:34.991 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:34.991 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:34.991 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:34.991 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:34.991 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:34.991 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:34.991 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:34.991 ' 00:33:40.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:40.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:40.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:40.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:40.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:40.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:33:40.386 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:40.386 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:40.386 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:40.386 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:40.386 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:40.386 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:40.386 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:40.386 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:40.646 11:00:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:40.646 11:00:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:40.646 11:00:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:40.646 
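The clear pass above tears the configuration down in reverse dependency order: per-subsystem children first (namespaces, hosts, listen addresses), then the subsystems themselves, and only then the malloc bdevs that backed the namespaces -- children before parents, consumers before the bdevs they consume. A sketch of the same teardown as standalone spdkcli invocations (the command paths are taken from the batch above; running them one at a time, as single-command arguments the way `ll /nvmf` is passed elsewhere in this log, is an illustrative assumption -- the job script batches them):

  ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all
  ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all
  ./scripts/spdkcli.py /nvmf/subsystem delete_all
  ./scripts/spdkcli.py /bdevs/malloc delete Malloc1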
11:00:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1932463 00:33:40.646 11:00:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1932463 ']' 00:33:40.646 11:00:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1932463 00:33:40.646 11:00:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:33:40.646 11:00:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:40.646 11:00:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1932463 00:33:40.646 11:00:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:40.646 11:00:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:40.646 11:00:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1932463' 00:33:40.646 killing process with pid 1932463 00:33:40.646 11:00:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 1932463 00:33:40.646 11:00:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 1932463 00:33:40.906 11:00:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:33:40.906 11:00:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:40.906 11:00:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1932463 ']' 00:33:40.906 11:00:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1932463 00:33:40.906 11:00:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1932463 ']' 00:33:40.906 11:00:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1932463 00:33:40.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1932463) - No such process 00:33:40.906 11:00:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 1932463 is not found' 00:33:40.906 Process with pid 1932463 is not found 00:33:40.906 11:00:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:40.906 11:00:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:40.906 11:00:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:40.906 00:33:40.906 real 0m17.316s 00:33:40.906 user 0m38.175s 00:33:40.906 sys 0m0.807s 00:33:40.906 11:00:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:40.906 11:00:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:40.906 ************************************ 00:33:40.906 END TEST spdkcli_nvmf_tcp 00:33:40.906 ************************************ 00:33:40.906 11:00:48 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:40.906 11:00:48 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:40.906 11:00:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:40.906 11:00:48 -- common/autotest_common.sh@10 -- # set +x 00:33:40.906 ************************************ 00:33:40.906 START TEST nvmf_identify_passthru 00:33:40.906 ************************************ 00:33:40.906 11:00:48 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:40.906 * Looking for test 
storage... 00:33:40.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:40.906 11:00:48 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:40.906 11:00:48 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:33:40.906 11:00:48 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:41.166 11:00:48 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:41.166 11:00:48 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:41.166 11:00:48 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:41.166 11:00:48 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:41.166 11:00:48 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:33:41.166 11:00:48 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:33:41.166 11:00:48 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:33:41.166 11:00:48 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:33:41.166 11:00:48 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:33:41.166 11:00:48 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:33:41.167 11:00:48 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:33:41.167 11:00:48 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:41.167 11:00:48 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:33:41.167 11:00:48 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:33:41.167 11:00:48 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:41.167 11:00:48 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:41.167 11:00:48 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:33:41.167 11:00:48 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:33:41.167 11:00:48 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:41.167 11:00:48 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:33:41.167 11:00:48 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:33:41.167 11:00:48 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:33:41.167 11:00:48 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:33:41.167 11:00:48 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:41.167 11:00:48 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:33:41.167 11:00:48 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:33:41.167 11:00:48 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:41.167 11:00:48 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:41.167 11:00:48 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:33:41.167 11:00:48 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:41.167 11:00:48 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:41.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:41.167 --rc genhtml_branch_coverage=1 00:33:41.167 --rc genhtml_function_coverage=1 00:33:41.167 --rc genhtml_legend=1 00:33:41.167 --rc geninfo_all_blocks=1 00:33:41.167 --rc geninfo_unexecuted_blocks=1 00:33:41.167 00:33:41.167 ' 00:33:41.167 11:00:48 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:41.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:41.167 --rc genhtml_branch_coverage=1 00:33:41.167 --rc genhtml_function_coverage=1 00:33:41.167 --rc genhtml_legend=1 00:33:41.167 --rc geninfo_all_blocks=1 00:33:41.167 --rc geninfo_unexecuted_blocks=1 00:33:41.167 00:33:41.167 ' 00:33:41.167 11:00:48 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:41.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:41.167 --rc genhtml_branch_coverage=1 00:33:41.167 --rc genhtml_function_coverage=1 00:33:41.167 --rc genhtml_legend=1 00:33:41.167 --rc geninfo_all_blocks=1 00:33:41.167 --rc geninfo_unexecuted_blocks=1 00:33:41.167 00:33:41.167 ' 00:33:41.167 11:00:48 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:41.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:41.167 --rc genhtml_branch_coverage=1 00:33:41.167 --rc genhtml_function_coverage=1 00:33:41.167 --rc genhtml_legend=1 00:33:41.167 --rc geninfo_all_blocks=1 00:33:41.167 --rc geninfo_unexecuted_blocks=1 00:33:41.167 00:33:41.167 ' 00:33:41.167 11:00:48 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:41.167 11:00:48 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:33:41.167 11:00:48 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:41.167 11:00:48 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:41.167 11:00:48 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:41.167 11:00:48 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:33:41.167 11:00:48 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:41.167 11:00:48 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:41.167 11:00:48 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:41.167 11:00:48 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:41.167 11:00:48 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:41.167 11:00:48 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:41.167 11:00:48 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:41.167 11:00:48 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:41.167 11:00:48 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:41.167 11:00:48 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:41.167 11:00:48 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:41.167 11:00:48 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:41.167 11:00:48 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:41.167 11:00:48 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:41.167 11:00:48 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:41.167 11:00:48 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:41.167 11:00:48 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:41.167 11:00:48 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.167 11:00:48 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.167 11:00:48 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.167 11:00:48 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:41.167 11:00:48 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.167 11:00:48 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:33:41.167 11:00:48 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:41.167 11:00:48 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:41.167 11:00:48 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:41.167 11:00:48 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:41.167 11:00:48 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:41.167 11:00:48 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:41.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:41.167 11:00:48 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:41.167 11:00:48 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:41.167 11:00:48 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:41.167 11:00:48 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:41.167 11:00:48 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:41.167 11:00:48 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:41.167 11:00:48 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:41.167 11:00:48 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:41.167 11:00:48 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.167 11:00:48 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.168 11:00:48 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.168 11:00:48 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:41.168 11:00:48 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.168 11:00:48 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:33:41.168 11:00:48 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:41.168 11:00:48 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:41.168 11:00:48 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:41.168 11:00:48 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:41.168 11:00:48 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:41.168 11:00:48 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:41.168 11:00:48 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:41.168 11:00:48 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:41.168 11:00:48 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:41.168 11:00:48 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:41.168 11:00:48 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:33:41.168 11:00:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:47.746 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:47.746 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:33:47.746 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:47.746 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:47.746 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:47.746 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:47.746 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:47.746 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:33:47.746 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:47.746 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:33:47.746 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:33:47.746 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:33:47.746 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:33:47.746 11:00:54 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:33:47.746 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:33:47.746 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:47.746 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:47.746 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:47.746 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:47.746 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:47.746 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:47.747 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:47.747 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:47.747 Found net devices under 0000:86:00.0: cvl_0_0 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:47.747 Found net devices under 0000:86:00.1: cvl_0_1 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:47.747 11:00:54 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:47.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:47.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.401 ms 00:33:47.747 00:33:47.747 --- 10.0.0.2 ping statistics --- 00:33:47.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:47.747 rtt min/avg/max/mdev = 0.401/0.401/0.401/0.000 ms 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:47.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:47.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:33:47.747 00:33:47.747 --- 10.0.0.1 ping statistics --- 00:33:47.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:47.747 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:47.747 11:00:54 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:47.747 11:00:54 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:33:47.747 11:00:54 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:47.747 11:00:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:47.747 11:00:54 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:33:47.747 11:00:54 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:33:47.747 11:00:54 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:33:47.747 11:00:54 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:33:47.747 11:00:54 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:33:47.747 11:00:54 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:33:47.747 11:00:54 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:33:47.747 11:00:54 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:47.747 11:00:54 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:47.747 11:00:54 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:33:47.747 11:00:54 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:33:47.747 11:00:54 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:33:47.747 11:00:54 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:33:47.747 11:00:54 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:33:47.747 11:00:54 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:33:47.747 11:00:54 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:33:47.747 11:00:54 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:33:47.747 11:00:54 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:33:51.942 11:00:58 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:33:51.942 11:00:58 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:33:51.942 11:00:58 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:33:51.943 11:00:58 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:33:56.138 11:01:02 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:33:56.138 11:01:02 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:33:56.138 11:01:02 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:56.138 11:01:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:56.138 11:01:02 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:33:56.138 11:01:02 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:56.138 11:01:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:56.138 11:01:02 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1939653 00:33:56.138 11:01:02 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:33:56.138 11:01:02 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:56.138 11:01:02 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1939653 00:33:56.138 11:01:02 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 1939653 ']' 00:33:56.138 11:01:02 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:56.138 11:01:02 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:56.138 11:01:02 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:56.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:56.138 11:01:02 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:56.138 11:01:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:56.138 [2024-11-19 11:01:02.842818] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:33:56.138 [2024-11-19 11:01:02.842866] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:56.138 [2024-11-19 11:01:02.922476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:56.138 [2024-11-19 11:01:02.968382] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:56.138 [2024-11-19 11:01:02.968418] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:33:56.138 [2024-11-19 11:01:02.968426] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:56.138 [2024-11-19 11:01:02.968432] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:56.138 [2024-11-19 11:01:02.968437] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:56.138 [2024-11-19 11:01:02.970023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:56.138 [2024-11-19 11:01:02.970051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:56.138 [2024-11-19 11:01:02.970156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:56.138 [2024-11-19 11:01:02.970158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:56.138 11:01:02 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:56.138 11:01:02 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:33:56.138 11:01:02 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:33:56.138 11:01:02 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.138 11:01:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:56.138 INFO: Log level set to 20 00:33:56.138 INFO: Requests: 00:33:56.138 { 00:33:56.138 "jsonrpc": "2.0", 00:33:56.138 "method": "nvmf_set_config", 00:33:56.138 "id": 1, 00:33:56.138 "params": { 00:33:56.138 "admin_cmd_passthru": { 00:33:56.138 "identify_ctrlr": true 00:33:56.138 } 00:33:56.138 } 00:33:56.138 } 00:33:56.138 00:33:56.138 INFO: response: 00:33:56.138 { 00:33:56.138 "jsonrpc": "2.0", 00:33:56.138 "id": 1, 00:33:56.138 "result": true 00:33:56.138 } 00:33:56.138 00:33:56.138 11:01:03 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.138 11:01:03 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:33:56.138 11:01:03 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.138 11:01:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:56.138 INFO: Setting log level to 20 00:33:56.138 INFO: Setting log level to 20 00:33:56.138 INFO: Log level set to 20 00:33:56.138 INFO: Log level set to 20 00:33:56.138 INFO: Requests: 00:33:56.138 { 00:33:56.138 "jsonrpc": "2.0", 00:33:56.138 "method": "framework_start_init", 00:33:56.138 "id": 1 00:33:56.138 } 00:33:56.138 00:33:56.138 INFO: Requests: 00:33:56.138 { 00:33:56.138 "jsonrpc": "2.0", 00:33:56.138 "method": "framework_start_init", 00:33:56.138 "id": 1 00:33:56.138 } 00:33:56.138 00:33:56.138 [2024-11-19 11:01:03.082751] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:33:56.138 INFO: response: 00:33:56.139 { 00:33:56.139 "jsonrpc": "2.0", 00:33:56.139 "id": 1, 00:33:56.139 "result": true 00:33:56.139 } 00:33:56.139 00:33:56.139 INFO: response: 00:33:56.139 { 00:33:56.139 "jsonrpc": "2.0", 00:33:56.139 "id": 1, 00:33:56.139 "result": true 00:33:56.139 } 00:33:56.139 00:33:56.139 11:01:03 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.139 11:01:03 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:56.139 11:01:03 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.139 11:01:03 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:33:56.139 INFO: Setting log level to 40 00:33:56.139 INFO: Setting log level to 40 00:33:56.139 INFO: Setting log level to 40 00:33:56.139 [2024-11-19 11:01:03.096132] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:56.139 11:01:03 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.139 11:01:03 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:33:56.139 11:01:03 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:56.139 11:01:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:56.139 11:01:03 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:33:56.139 11:01:03 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.139 11:01:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:58.677 Nvme0n1 00:33:58.677 11:01:05 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.677 11:01:05 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:33:58.677 11:01:05 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.677 11:01:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:58.677 11:01:05 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.677 11:01:05 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:33:58.677 11:01:05 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.677 11:01:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:58.677 11:01:06 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.677 11:01:06 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:58.677 11:01:06 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.677 11:01:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:58.677 [2024-11-19 11:01:06.005091] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:58.677 11:01:06 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.677 11:01:06 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:33:58.677 11:01:06 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.677 11:01:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:58.677 [ 00:33:58.677 { 00:33:58.677 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:58.677 "subtype": "Discovery", 00:33:58.677 "listen_addresses": [], 00:33:58.677 "allow_any_host": true, 00:33:58.677 "hosts": [] 00:33:58.677 }, 00:33:58.677 { 00:33:58.677 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:58.677 "subtype": "NVMe", 00:33:58.677 "listen_addresses": [ 00:33:58.677 { 00:33:58.677 "trtype": "TCP", 00:33:58.677 "adrfam": "IPv4", 00:33:58.677 "traddr": "10.0.0.2", 00:33:58.677 "trsvcid": "4420" 00:33:58.677 } 00:33:58.677 ], 00:33:58.677 "allow_any_host": true, 00:33:58.677 "hosts": [], 00:33:58.677 "serial_number": 
"SPDK00000000000001", 00:33:58.677 "model_number": "SPDK bdev Controller", 00:33:58.677 "max_namespaces": 1, 00:33:58.677 "min_cntlid": 1, 00:33:58.677 "max_cntlid": 65519, 00:33:58.677 "namespaces": [ 00:33:58.677 { 00:33:58.677 "nsid": 1, 00:33:58.677 "bdev_name": "Nvme0n1", 00:33:58.677 "name": "Nvme0n1", 00:33:58.677 "nguid": "E41D7AD136A847EF8259FD8482C22BD5", 00:33:58.677 "uuid": "e41d7ad1-36a8-47ef-8259-fd8482c22bd5" 00:33:58.677 } 00:33:58.677 ] 00:33:58.677 } 00:33:58.677 ] 00:33:58.677 11:01:06 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.677 11:01:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:58.677 11:01:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:33:58.677 11:01:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:33:58.937 11:01:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:33:58.937 11:01:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:58.937 11:01:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:33:58.937 11:01:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:33:59.198 11:01:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:33:59.198 11:01:06 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:33:59.198 11:01:06 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:33:59.198 11:01:06 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:59.198 11:01:06 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.198 11:01:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:59.198 11:01:06 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.198 11:01:06 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:33:59.198 11:01:06 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:33:59.198 11:01:06 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:59.198 11:01:06 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:33:59.198 11:01:06 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:59.198 11:01:06 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:33:59.198 11:01:06 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:59.198 11:01:06 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:59.198 rmmod nvme_tcp 00:33:59.198 rmmod nvme_fabrics 00:33:59.198 rmmod nvme_keyring 00:33:59.198 11:01:06 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:59.198 11:01:06 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:33:59.198 11:01:06 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:33:59.198 11:01:06 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 1939653 ']' 00:33:59.198 11:01:06 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 1939653 00:33:59.198 11:01:06 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 1939653 ']' 00:33:59.198 11:01:06 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 1939653 00:33:59.198 11:01:06 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:33:59.198 11:01:06 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:59.198 11:01:06 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1939653 00:33:59.198 11:01:06 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:59.198 11:01:06 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:59.198 11:01:06 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1939653' 00:33:59.198 killing process with pid 1939653 00:33:59.198 11:01:06 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 1939653 00:33:59.198 11:01:06 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 1939653 00:34:00.577 11:01:07 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:00.577 11:01:07 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:00.577 11:01:07 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:00.577 11:01:07 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:34:00.577 11:01:07 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:34:00.577 11:01:07 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:00.577 11:01:08 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:34:00.577 11:01:08 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:00.577 11:01:08 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:00.577 11:01:08 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:00.577 11:01:08 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:00.577 11:01:08 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:03.113 11:01:10 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:03.113 00:34:03.113 real 0m21.846s 00:34:03.113 user 0m26.717s 00:34:03.113 sys 0m6.258s 00:34:03.113 11:01:10 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:03.113 11:01:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:03.113 ************************************ 00:34:03.113 END TEST nvmf_identify_passthru 00:34:03.113 ************************************ 00:34:03.113 11:01:10 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:03.113 11:01:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:03.113 11:01:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:03.113 11:01:10 -- common/autotest_common.sh@10 -- # set +x 00:34:03.113 ************************************ 00:34:03.113 START TEST nvmf_dif 00:34:03.113 ************************************ 00:34:03.113 11:01:10 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:03.113 * Looking for test 
storage... 00:34:03.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:03.113 11:01:10 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:03.113 11:01:10 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:34:03.113 11:01:10 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:03.113 11:01:10 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:03.113 11:01:10 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:03.113 11:01:10 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:03.113 11:01:10 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:03.113 11:01:10 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:34:03.113 11:01:10 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:34:03.113 11:01:10 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:34:03.113 11:01:10 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:34:03.113 11:01:10 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:34:03.113 11:01:10 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:34:03.113 11:01:10 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:34:03.113 11:01:10 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:03.113 11:01:10 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:34:03.113 11:01:10 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:34:03.113 11:01:10 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:03.113 11:01:10 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:03.113 11:01:10 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:34:03.113 11:01:10 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:34:03.113 11:01:10 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:03.113 11:01:10 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:34:03.113 11:01:10 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:34:03.114 11:01:10 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:34:03.114 11:01:10 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:34:03.114 11:01:10 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:03.114 11:01:10 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:34:03.114 11:01:10 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:34:03.114 11:01:10 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:03.114 11:01:10 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:03.114 11:01:10 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:34:03.114 11:01:10 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:03.114 11:01:10 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:03.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.114 --rc genhtml_branch_coverage=1 00:34:03.114 --rc genhtml_function_coverage=1 00:34:03.114 --rc genhtml_legend=1 00:34:03.114 --rc geninfo_all_blocks=1 00:34:03.114 --rc geninfo_unexecuted_blocks=1 00:34:03.114 00:34:03.114 ' 00:34:03.114 11:01:10 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:03.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.114 --rc genhtml_branch_coverage=1 00:34:03.114 --rc genhtml_function_coverage=1 00:34:03.114 --rc genhtml_legend=1 00:34:03.114 --rc geninfo_all_blocks=1 00:34:03.114 --rc geninfo_unexecuted_blocks=1 00:34:03.114 00:34:03.114 ' 00:34:03.114 11:01:10 nvmf_dif -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:03.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.114 --rc genhtml_branch_coverage=1 00:34:03.114 --rc genhtml_function_coverage=1 00:34:03.114 --rc genhtml_legend=1 00:34:03.114 --rc geninfo_all_blocks=1 00:34:03.114 --rc geninfo_unexecuted_blocks=1 00:34:03.114 00:34:03.114 ' 00:34:03.114 11:01:10 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:03.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.114 --rc genhtml_branch_coverage=1 00:34:03.114 --rc genhtml_function_coverage=1 00:34:03.114 --rc genhtml_legend=1 00:34:03.114 --rc geninfo_all_blocks=1 00:34:03.114 --rc geninfo_unexecuted_blocks=1 00:34:03.114 00:34:03.114 ' 00:34:03.114 11:01:10 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:03.114 11:01:10 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:34:03.114 11:01:10 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:03.114 11:01:10 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:03.114 11:01:10 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:03.114 11:01:10 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:03.114 11:01:10 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:03.114 11:01:10 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:03.114 11:01:10 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:03.114 11:01:10 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:03.114 11:01:10 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:03.114 11:01:10 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:03.114 11:01:10 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:03.114 11:01:10 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:03.114 11:01:10 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:03.114 11:01:10 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:03.114 11:01:10 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:03.114 11:01:10 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:03.114 11:01:10 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:03.114 11:01:10 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:34:03.114 11:01:10 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:03.114 11:01:10 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:03.114 11:01:10 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:03.114 11:01:10 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.114 11:01:10 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.114 11:01:10 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.114 11:01:10 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:34:03.114 11:01:10 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.114 11:01:10 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:34:03.114 11:01:10 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:03.114 11:01:10 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:03.114 11:01:10 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:03.114 11:01:10 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:03.114 11:01:10 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:03.114 11:01:10 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:03.114 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:03.114 11:01:10 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:03.114 11:01:10 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:03.114 11:01:10 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:03.114 11:01:10 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:34:03.114 11:01:10 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:34:03.114 11:01:10 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:03.114 11:01:10 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:34:03.114 11:01:10 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:34:03.114 11:01:10 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:03.114 11:01:10 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:03.114 11:01:10 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:03.114 11:01:10 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:03.114 11:01:10 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:03.114 11:01:10 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:03.114 11:01:10 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:03.114 11:01:10 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:03.114 11:01:10 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:03.114 11:01:10 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:03.114 11:01:10 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:34:03.114 11:01:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:09.692 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:09.692 
11:01:15 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:09.692 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:09.692 Found net devices under 0000:86:00.0: cvl_0_0 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:09.692 Found net devices under 0000:86:00.1: cvl_0_1 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1
00:34:09.692 11:01:15 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:34:09.693 11:01:15 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:34:09.693 11:01:15 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:34:09.693 11:01:15 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:34:09.693 11:01:15 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:34:09.693 11:01:15 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:34:09.693 11:01:15 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:34:09.693 11:01:15 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:34:09.693 11:01:16 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:34:09.693 11:01:16 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:34:09.693 11:01:16 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:34:09.693 11:01:16 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:34:09.693 11:01:16 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:34:09.693 11:01:16 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:34:09.693 11:01:16 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:34:09.693 11:01:16 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:34:09.693 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:34:09.693 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms
00:34:09.693
00:34:09.693 --- 10.0.0.2 ping statistics ---
00:34:09.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:09.693 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms
00:34:09.693 11:01:16 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:34:09.693 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:34:09.693 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms
00:34:09.693
00:34:09.693 --- 10.0.0.1 ping statistics ---
00:34:09.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:09.693 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms
00:34:09.693 11:01:16 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:34:09.693 11:01:16 nvmf_dif -- nvmf/common.sh@450 -- # return 0
00:34:09.693 11:01:16 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']'
00:34:09.693 11:01:16 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:34:11.601 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:34:11.601 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:34:11.601 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:34:11.601 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:34:11.601 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:34:11.601 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:34:11.601 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:34:11.601 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:34:11.601 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:34:11.601 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:34:11.601 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:34:11.601 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:34:11.601 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:34:11.601 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:34:11.601 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:34:11.601 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:34:11.601 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:34:11.601 11:01:19 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:34:11.601 11:01:19 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:34:11.601 11:01:19 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:34:11.601 11:01:19 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:34:11.601 11:01:19 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:34:11.864 11:01:19 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:34:11.864 11:01:19 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip'
00:34:11.864 11:01:19 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart
00:34:11.864 11:01:19 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:34:11.864 11:01:19 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:11.864 11:01:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:34:11.864 11:01:19 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=1945158
00:34:11.864 11:01:19 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 1945158
00:34:11.864 11:01:19 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:34:11.864 11:01:19 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 1945158 ']'
00:34:11.864 11:01:19 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:11.864 11:01:19 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100
00:34:11.864 11:01:19 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket
/var/tmp/spdk.sock...' 00:34:11.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:11.864 11:01:19 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:11.864 11:01:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:11.864 [2024-11-19 11:01:19.148216] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:34:11.864 [2024-11-19 11:01:19.148281] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:11.864 [2024-11-19 11:01:19.228402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:11.864 [2024-11-19 11:01:19.270152] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:11.864 [2024-11-19 11:01:19.270189] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:11.864 [2024-11-19 11:01:19.270196] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:11.864 [2024-11-19 11:01:19.270202] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:11.864 [2024-11-19 11:01:19.270207] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:11.864 [2024-11-19 11:01:19.270799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:12.124 11:01:19 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:12.124 11:01:19 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:34:12.124 11:01:19 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:12.124 11:01:19 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:12.124 11:01:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:12.124 11:01:19 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:12.124 11:01:19 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:34:12.124 11:01:19 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:12.124 11:01:19 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.124 11:01:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:12.124 [2024-11-19 11:01:19.407275] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:12.124 11:01:19 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.124 11:01:19 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:12.124 11:01:19 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:12.124 11:01:19 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:12.124 11:01:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:12.124 ************************************ 00:34:12.124 START TEST fio_dif_1_default 00:34:12.124 ************************************ 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:12.124 bdev_null0 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:12.124 [2024-11-19 11:01:19.483590] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:12.124 { 00:34:12.124 "params": { 00:34:12.124 "name": "Nvme$subsystem", 00:34:12.124 "trtype": "$TEST_TRANSPORT", 00:34:12.124 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:12.124 "adrfam": "ipv4", 00:34:12.124 "trsvcid": "$NVMF_PORT", 00:34:12.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:12.124 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:34:12.124 "hdgst": ${hdgst:-false}, 00:34:12.124 "ddgst": ${ddgst:-false} 00:34:12.124 }, 00:34:12.124 "method": "bdev_nvme_attach_controller" 00:34:12.124 } 00:34:12.124 EOF 00:34:12.124 )") 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=,
00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:34:12.124 "params": {
00:34:12.124 "name": "Nvme0",
00:34:12.124 "trtype": "tcp",
00:34:12.124 "traddr": "10.0.0.2",
00:34:12.124 "adrfam": "ipv4",
00:34:12.124 "trsvcid": "4420",
00:34:12.124 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:34:12.124 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:34:12.124 "hdgst": false,
00:34:12.124 "ddgst": false
00:34:12.124 },
00:34:12.124 "method": "bdev_nvme_attach_controller"
00:34:12.124 }'
00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib=
00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:34:12.124 11:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:34:12.125 11:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:34:12.125 11:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:34:12.125 11:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:34:12.125 11:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib=
00:34:12.125 11:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:34:12.125 11:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:34:12.125 11:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:34:12.696 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:34:12.696 fio-3.35
00:34:12.696 Starting 1 thread
00:34:24.896
00:34:24.896 filename0: (groupid=0, jobs=1): err= 0: pid=1945491: Tue Nov 19 11:01:30 2024
00:34:24.896 read: IOPS=203, BW=813KiB/s (833kB/s)(8160KiB/10033msec)
00:34:24.896 slat (nsec): min=5852, max=24762, avg=6137.99, stdev=826.14
00:34:24.896 clat (usec): min=372, max=45999, avg=19653.84, stdev=20384.96
00:34:24.896 lat (usec): min=378, max=46024, avg=19659.98, stdev=20384.92
00:34:24.896 clat percentiles (usec):
00:34:24.896 | 1.00th=[ 383], 5.00th=[ 396], 10.00th=[ 400], 20.00th=[ 408],
00:34:24.896 | 30.00th=[ 416], 40.00th=[ 429], 50.00th=[ 603], 60.00th=[40633],
00:34:24.896 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681],
00:34:24.896 | 99.00th=[42730], 99.50th=[42730], 99.90th=[45876], 99.95th=[45876],
00:34:24.896 | 99.99th=[45876]
00:34:24.896 bw ( KiB/s): min= 768, max= 960, per=100.00%, avg=814.40, stdev=54.42, samples=20
00:34:24.896 iops : min= 192, max= 240, avg=203.60, stdev=13.60, samples=20
00:34:24.896 lat (usec) : 500=43.33%, 750=9.61%
00:34:24.896 lat (msec) : 50=47.06%
00:34:24.896 cpu : usr=91.93%, sys=7.81%, ctx=13, majf=0, minf=0
00:34:24.896 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:24.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:24.896 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:24.896 issued rwts: total=2040,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:24.896 latency : target=0, window=0, percentile=100.00%, depth=4
00:34:24.896 Run status group 0 (all jobs):
00:34:24.896 READ: bw=813KiB/s (833kB/s), 813KiB/s-813KiB/s (833kB/s-833kB/s), io=8160KiB (8356kB), run=10033-10033msec
00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0
00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub
00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@"
00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0
00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0
00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:24.896
00:34:24.896 real 0m11.142s
00:34:24.896 user 0m15.845s
00:34:24.896 sys 0m1.079s
00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:34:24.896 ************************************
00:34:24.896 END TEST fio_dif_1_default
00:34:24.896 ************************************
00:34:24.896 11:01:30 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems
00:34:24.896 11:01:30 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:34:24.896 11:01:30 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable
00:34:24.896 11:01:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:34:24.896 ************************************
00:34:24.896 START TEST fio_dif_1_multi_subsystems
00:34:24.896 ************************************
00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems
00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1
00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1
00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub
00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@"
00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0
00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0
00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:34:24.896 bdev_null0
00:34:24.896 11:01:30
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:24.896 [2024-11-19 11:01:30.696773] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:24.896 bdev_null1 00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.896 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:24.897 { 00:34:24.897 "params": { 00:34:24.897 "name": "Nvme$subsystem", 00:34:24.897 "trtype": "$TEST_TRANSPORT", 00:34:24.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:24.897 "adrfam": "ipv4", 00:34:24.897 "trsvcid": "$NVMF_PORT", 00:34:24.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:24.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:24.897 "hdgst": ${hdgst:-false}, 00:34:24.897 "ddgst": ${ddgst:-false} 00:34:24.897 }, 00:34:24.897 "method": "bdev_nvme_attach_controller" 00:34:24.897 } 00:34:24.897 EOF 00:34:24.897 )") 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:24.897 { 00:34:24.897 "params": { 00:34:24.897 "name": "Nvme$subsystem", 00:34:24.897 "trtype": "$TEST_TRANSPORT", 00:34:24.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:24.897 "adrfam": "ipv4", 00:34:24.897 "trsvcid": "$NVMF_PORT", 00:34:24.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:24.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:24.897 "hdgst": ${hdgst:-false}, 00:34:24.897 "ddgst": ${ddgst:-false} 00:34:24.897 }, 00:34:24.897 "method": "bdev_nvme_attach_controller" 00:34:24.897 } 00:34:24.897 EOF 00:34:24.897 )") 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:24.897 "params": { 00:34:24.897 "name": "Nvme0", 00:34:24.897 "trtype": "tcp", 00:34:24.897 "traddr": "10.0.0.2", 00:34:24.897 "adrfam": "ipv4", 00:34:24.897 "trsvcid": "4420", 00:34:24.897 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:24.897 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:24.897 "hdgst": false, 00:34:24.897 "ddgst": false 00:34:24.897 }, 00:34:24.897 "method": "bdev_nvme_attach_controller" 00:34:24.897 },{ 00:34:24.897 "params": { 00:34:24.897 "name": "Nvme1", 00:34:24.897 "trtype": "tcp", 00:34:24.897 "traddr": "10.0.0.2", 00:34:24.897 "adrfam": "ipv4", 00:34:24.897 "trsvcid": "4420", 00:34:24.897 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:24.897 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:24.897 "hdgst": false, 00:34:24.897 "ddgst": false 00:34:24.897 }, 00:34:24.897 "method": "bdev_nvme_attach_controller" 00:34:24.897 }' 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1349 -- # asan_lib= 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:24.897 11:01:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:24.897 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:24.897 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:24.897 fio-3.35 00:34:24.897 Starting 2 threads 00:34:34.870 00:34:34.870 filename0: (groupid=0, jobs=1): err= 0: pid=1947456: Tue Nov 19 11:01:41 2024 00:34:34.870 read: IOPS=190, BW=764KiB/s (782kB/s)(7648KiB/10016msec) 00:34:34.870 slat (nsec): min=6126, max=41269, avg=7090.16, stdev=2022.65 00:34:34.870 clat (usec): min=370, max=42584, avg=20931.80, stdev=20527.49 00:34:34.870 lat (usec): min=376, max=42591, avg=20938.89, stdev=20526.92 00:34:34.870 clat percentiles (usec): 00:34:34.870 | 1.00th=[ 383], 5.00th=[ 392], 10.00th=[ 400], 20.00th=[ 408], 00:34:34.870 | 30.00th=[ 416], 40.00th=[ 424], 50.00th=[ 611], 60.00th=[41157], 00:34:34.870 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:34:34.871 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:34:34.871 | 99.99th=[42730] 00:34:34.871 bw ( KiB/s): min= 704, max= 832, per=66.43%, avg=763.20, stdev=26.01, samples=20 00:34:34.871 iops : min= 176, max= 208, avg=190.80, stdev= 6.50, samples=20 00:34:34.871 lat (usec) : 500=49.16%, 750=0.84% 00:34:34.871 lat (msec) : 50=50.00% 00:34:34.871 cpu : usr=96.81%, sys=2.90%, ctx=22, majf=0, minf=9 00:34:34.871 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:34.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:34.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:34.871 issued rwts: total=1912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:34.871 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:34.871 filename1: (groupid=0, jobs=1): err= 0: pid=1947457: Tue Nov 19 11:01:41 2024 00:34:34.871 read: IOPS=96, BW=385KiB/s (395kB/s)(3856KiB/10008msec) 00:34:34.871 slat (nsec): min=6100, max=43162, avg=7677.75, stdev=2631.17 00:34:34.871 clat (usec): min=40787, max=42089, avg=41500.92, stdev=496.32 00:34:34.871 lat (usec): min=40793, max=42132, avg=41508.59, stdev=496.40 00:34:34.871 clat percentiles (usec): 00:34:34.871 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:34.871 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:34:34.871 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:34.871 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:34.871 | 99.99th=[42206] 00:34:34.871 bw ( KiB/s): min= 384, max= 384, per=33.43%, avg=384.00, stdev= 0.00, samples=20 00:34:34.871 iops : min= 96, max= 96, avg=96.00, stdev= 0.00, samples=20 00:34:34.871 lat (msec) : 50=100.00% 00:34:34.871 cpu : usr=96.69%, sys=3.04%, ctx=9, majf=0, minf=9 00:34:34.871 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:34.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:34:34.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:34.871 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:34.871 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:34.871 00:34:34.871 Run status group 0 (all jobs): 00:34:34.871 READ: bw=1149KiB/s (1176kB/s), 385KiB/s-764KiB/s (395kB/s-782kB/s), io=11.2MiB (11.8MB), run=10008-10016msec 00:34:34.871 11:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:34.871 11:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:34:34.871 11:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:34.871 11:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:34.871 11:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:34:34.871 11:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:34.871 11:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.871 11:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:34.871 11:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.871 11:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:34.871 11:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.871 11:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:34.871 11:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.871 11:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:34.871 11:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:34.871 11:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:34:34.871 11:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:34.871 11:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.871 11:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:34.871 11:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.871 11:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:34.871 11:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.871 11:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:34.871 11:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.871 00:34:34.871 real 0m11.437s 00:34:34.871 user 0m26.547s 00:34:34.871 sys 0m0.905s 00:34:34.871 11:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:34.871 11:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:34.871 ************************************ 00:34:34.871 END TEST fio_dif_1_multi_subsystems 00:34:34.871 ************************************ 00:34:34.871 11:01:42 nvmf_dif -- target/dif.sh@143 -- # run_test 
fio_dif_rand_params fio_dif_rand_params 00:34:34.871 11:01:42 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:34.871 11:01:42 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:34.871 11:01:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:34.871 ************************************ 00:34:34.871 START TEST fio_dif_rand_params 00:34:34.871 ************************************ 00:34:34.871 11:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:34:34.871 11:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:34:34.871 11:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:34.871 11:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:34:34.871 11:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:34:34.871 11:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:34:34.871 11:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:34:34.871 11:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:34:34.871 11:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:34:34.871 11:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:34.871 11:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:34.871 11:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:34.871 11:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:34.871 11:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:34.871 11:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.871 11:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:34.871 bdev_null0 00:34:34.871 11:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.871 11:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:34.871 11:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.871 11:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:34.871 11:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.871 11:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:34.871 11:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.871 11:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:34.871 11:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.871 11:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:34.871 11:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.871 11:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:34.871 [2024-11-19 11:01:42.206285] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:34.871 
11:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.871 11:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:34.871 11:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:34.871 11:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:34.871 11:01:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:34.871 11:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:34.871 11:01:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:34.871 11:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:34.871 11:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:34.871 11:01:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:34.871 11:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:34.871 11:01:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:34.871 { 00:34:34.871 "params": { 00:34:34.871 "name": "Nvme$subsystem", 00:34:34.871 "trtype": "$TEST_TRANSPORT", 00:34:34.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:34.871 "adrfam": "ipv4", 00:34:34.871 "trsvcid": "$NVMF_PORT", 00:34:34.871 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:34.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:34.871 "hdgst": ${hdgst:-false}, 00:34:34.871 "ddgst": ${ddgst:-false} 00:34:34.871 }, 00:34:34.871 "method": "bdev_nvme_attach_controller" 00:34:34.871 } 00:34:34.871 EOF 00:34:34.871 )") 00:34:34.871 11:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:34.871 11:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:34.871 11:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:34.871 11:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:34.871 11:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:34.872 11:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:34.872 11:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:34.872 11:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:34.872 11:01:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:34.872 11:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:34.872 11:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:34.872 11:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:34.872 11:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:34.872 11:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:34.872 11:01:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
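This pass of fio_dif_rand_params reruns the same flow with a DIF type 3 null bdev and the parameter combination set at target/dif.sh@103: bs=128k, numjobs=3, iodepth=3, runtime=5. Assuming the /tmp/nvme0.json config from the earlier sketch, a hand-written equivalent of the generated job could look like the following; filename=Nvme0n1 is an assumption derived from the controller name Nvme0 plus namespace 1, thread=1 is what the SPDK fio plugin requires, and time_based is illustrative rather than copied from gen_fio_conf:

# Sketch: drive the DIF-type-3 null bdev over NVMe/TCP with randomized
# job parameters, using the fio binary and plugin paths seen in this log
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
/usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=/tmp/nvme0.json --thread=1 \
    --name=filename0 --filename=Nvme0n1 \
    --rw=randread --bs=128k --numjobs=3 --iodepth=3 \
    --time_based=1 --runtime=5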
00:34:34.872 11:01:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:34.872 11:01:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:34.872 "params": { 00:34:34.872 "name": "Nvme0", 00:34:34.872 "trtype": "tcp", 00:34:34.872 "traddr": "10.0.0.2", 00:34:34.872 "adrfam": "ipv4", 00:34:34.872 "trsvcid": "4420", 00:34:34.872 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:34.872 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:34.872 "hdgst": false, 00:34:34.872 "ddgst": false 00:34:34.872 }, 00:34:34.872 "method": "bdev_nvme_attach_controller" 00:34:34.872 }' 00:34:34.872 11:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:34.872 11:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:34.872 11:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:34.872 11:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:34.872 11:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:34.872 11:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:34.872 11:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:34.872 11:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:34.872 11:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:34.872 11:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:35.437 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:35.437 ... 
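Annotation: the asan_lib probing and the LD_PRELOAD assignment traced just above come from the fio_plugin wrapper (autotest_common.sh@1341-1356). If the spdk_bdev fio plugin was built against a sanitizer, its runtime library must be preloaded ahead of the plugin or fio cannot load it; in this run both ldd greps come back empty, so only the plugin itself is preloaded. A rough sketch of the wrapper under those assumptions, not the verbatim helper:

fio_plugin() {
  local fio_dir=/usr/src/fio
  local plugin=$1 asan_lib= sanitizer
  shift
  local sanitizers=('libasan' 'libclang_rt.asan')
  for sanitizer in "${sanitizers[@]}"; do
    # Third ldd column is the resolved runtime path; empty in this run,
    # which is why the trace shows "[[ -n '' ]]" for both sanitizers.
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n $asan_lib ]] && break
  done
  # Preload the sanitizer runtime (if any) ahead of the plugin, then hand the
  # remaining arguments straight to fio.
  LD_PRELOAD="$asan_lib $plugin" "$fio_dir"/fio "$@"
}

In this test it is invoked through fio_bdev with the job file and JSON config passed as process substitutions, i.e. fio receives --spdk_json_conf /dev/fd/62 (the JSON printed above) and /dev/fd/61 (the job file from gen_fio_conf).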
00:34:35.437 fio-3.35 00:34:35.437 Starting 3 threads 00:34:42.008 00:34:42.008 filename0: (groupid=0, jobs=1): err= 0: pid=1949417: Tue Nov 19 11:01:48 2024 00:34:42.008 read: IOPS=315, BW=39.4MiB/s (41.3MB/s)(199MiB/5044msec) 00:34:42.008 slat (nsec): min=6247, max=28994, avg=10974.50, stdev=1745.86 00:34:42.008 clat (usec): min=4831, max=49597, avg=9477.26, stdev=3457.62 00:34:42.008 lat (usec): min=4842, max=49606, avg=9488.24, stdev=3457.63 00:34:42.008 clat percentiles (usec): 00:34:42.008 | 1.00th=[ 5538], 5.00th=[ 6194], 10.00th=[ 6718], 20.00th=[ 8160], 00:34:42.008 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9765], 00:34:42.008 | 70.00th=[10028], 80.00th=[10421], 90.00th=[11076], 95.00th=[11469], 00:34:42.008 | 99.00th=[12911], 99.50th=[45876], 99.90th=[49546], 99.95th=[49546], 00:34:42.008 | 99.99th=[49546] 00:34:42.008 bw ( KiB/s): min=32768, max=45312, per=33.91%, avg=40627.20, stdev=3238.28, samples=10 00:34:42.008 iops : min= 256, max= 354, avg=317.40, stdev=25.30, samples=10 00:34:42.008 lat (msec) : 10=67.55%, 20=31.76%, 50=0.69% 00:34:42.008 cpu : usr=94.75%, sys=4.98%, ctx=11, majf=0, minf=18 00:34:42.008 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:42.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.008 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.008 issued rwts: total=1590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:42.008 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:42.008 filename0: (groupid=0, jobs=1): err= 0: pid=1949418: Tue Nov 19 11:01:48 2024 00:34:42.008 read: IOPS=336, BW=42.0MiB/s (44.1MB/s)(212MiB/5046msec) 00:34:42.008 slat (nsec): min=6255, max=33098, avg=10699.22, stdev=1966.95 00:34:42.008 clat (usec): min=3131, max=51307, avg=8887.15, stdev=4542.30 00:34:42.008 lat (usec): min=3138, max=51319, avg=8897.85, stdev=4542.35 00:34:42.008 clat percentiles (usec): 00:34:42.008 | 1.00th=[ 3556], 5.00th=[ 6063], 10.00th=[ 6980], 20.00th=[ 7570], 00:34:42.008 | 30.00th=[ 8029], 40.00th=[ 8291], 50.00th=[ 8586], 60.00th=[ 8848], 00:34:42.008 | 70.00th=[ 9110], 80.00th=[ 9503], 90.00th=[10028], 95.00th=[10421], 00:34:42.008 | 99.00th=[47449], 99.50th=[49021], 99.90th=[51119], 99.95th=[51119], 00:34:42.008 | 99.99th=[51119] 00:34:42.008 bw ( KiB/s): min=40192, max=49920, per=36.17%, avg=43340.80, stdev=3055.48, samples=10 00:34:42.008 iops : min= 314, max= 390, avg=338.60, stdev=23.87, samples=10 00:34:42.008 lat (msec) : 4=3.12%, 10=86.67%, 20=9.02%, 50=0.88%, 100=0.29% 00:34:42.008 cpu : usr=94.69%, sys=5.03%, ctx=9, majf=0, minf=79 00:34:42.008 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:42.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.008 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.008 issued rwts: total=1696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:42.008 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:42.008 filename0: (groupid=0, jobs=1): err= 0: pid=1949419: Tue Nov 19 11:01:48 2024 00:34:42.009 read: IOPS=284, BW=35.6MiB/s (37.3MB/s)(180MiB/5044msec) 00:34:42.009 slat (nsec): min=6288, max=39279, avg=10976.24, stdev=1784.33 00:34:42.009 clat (usec): min=3027, max=51740, avg=10486.71, stdev=5729.56 00:34:42.009 lat (usec): min=3034, max=51747, avg=10497.69, stdev=5729.50 00:34:42.009 clat percentiles (usec): 00:34:42.009 | 1.00th=[ 5735], 5.00th=[ 6980], 10.00th=[ 7898], 20.00th=[ 8717], 
00:34:42.009 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[10159], 00:34:42.009 | 70.00th=[10552], 80.00th=[10945], 90.00th=[11469], 95.00th=[11994], 00:34:42.009 | 99.00th=[49546], 99.50th=[50070], 99.90th=[51643], 99.95th=[51643], 00:34:42.009 | 99.99th=[51643] 00:34:42.009 bw ( KiB/s): min=28672, max=40704, per=30.66%, avg=36736.00, stdev=3784.13, samples=10 00:34:42.009 iops : min= 224, max= 318, avg=287.00, stdev=29.56, samples=10 00:34:42.009 lat (msec) : 4=0.63%, 10=54.56%, 20=42.80%, 50=1.39%, 100=0.63% 00:34:42.009 cpu : usr=94.80%, sys=4.90%, ctx=11, majf=0, minf=57 00:34:42.009 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:42.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.009 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.009 issued rwts: total=1437,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:42.009 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:42.009 00:34:42.009 Run status group 0 (all jobs): 00:34:42.009 READ: bw=117MiB/s (123MB/s), 35.6MiB/s-42.0MiB/s (37.3MB/s-44.1MB/s), io=590MiB (619MB), run=5044-5046msec 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:42.009 bdev_null0 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:42.009 [2024-11-19 11:01:48.510051] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:42.009 bdev_null1 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:42.009 bdev_null2 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:42.009 11:01:48 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:42.009 { 00:34:42.009 "params": { 00:34:42.009 "name": "Nvme$subsystem", 00:34:42.009 "trtype": "$TEST_TRANSPORT", 00:34:42.009 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:42.009 "adrfam": "ipv4", 00:34:42.009 "trsvcid": "$NVMF_PORT", 00:34:42.009 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:42.009 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:42.009 "hdgst": ${hdgst:-false}, 00:34:42.009 "ddgst": ${ddgst:-false} 00:34:42.009 }, 00:34:42.009 "method": "bdev_nvme_attach_controller" 00:34:42.009 } 00:34:42.009 EOF 00:34:42.009 )") 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:42.009 11:01:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:42.010 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:42.010 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:42.010 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:42.010 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:42.010 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:42.010 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:42.010 11:01:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:42.010 11:01:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:42.010 { 00:34:42.010 "params": { 00:34:42.010 "name": "Nvme$subsystem", 00:34:42.010 "trtype": "$TEST_TRANSPORT", 00:34:42.010 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:42.010 "adrfam": "ipv4", 00:34:42.010 "trsvcid": "$NVMF_PORT", 00:34:42.010 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:42.010 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:42.010 "hdgst": ${hdgst:-false}, 00:34:42.010 "ddgst": ${ddgst:-false} 00:34:42.010 }, 00:34:42.010 "method": "bdev_nvme_attach_controller" 00:34:42.010 } 00:34:42.010 EOF 00:34:42.010 )") 00:34:42.010 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:42.010 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:42.010 11:01:48 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:42.010 11:01:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:42.010 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:42.010 11:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:42.010 11:01:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:42.010 11:01:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:42.010 { 00:34:42.010 "params": { 00:34:42.010 "name": "Nvme$subsystem", 00:34:42.010 "trtype": "$TEST_TRANSPORT", 00:34:42.010 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:42.010 "adrfam": "ipv4", 00:34:42.010 "trsvcid": "$NVMF_PORT", 00:34:42.010 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:42.010 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:42.010 "hdgst": ${hdgst:-false}, 00:34:42.010 "ddgst": ${ddgst:-false} 00:34:42.010 }, 00:34:42.010 "method": "bdev_nvme_attach_controller" 00:34:42.010 } 00:34:42.010 EOF 00:34:42.010 )") 00:34:42.010 11:01:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:42.010 11:01:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:34:42.010 11:01:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:42.010 11:01:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:42.010 "params": { 00:34:42.010 "name": "Nvme0", 00:34:42.010 "trtype": "tcp", 00:34:42.010 "traddr": "10.0.0.2", 00:34:42.010 "adrfam": "ipv4", 00:34:42.010 "trsvcid": "4420", 00:34:42.010 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:42.010 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:42.010 "hdgst": false, 00:34:42.010 "ddgst": false 00:34:42.010 }, 00:34:42.010 "method": "bdev_nvme_attach_controller" 00:34:42.010 },{ 00:34:42.010 "params": { 00:34:42.010 "name": "Nvme1", 00:34:42.010 "trtype": "tcp", 00:34:42.010 "traddr": "10.0.0.2", 00:34:42.010 "adrfam": "ipv4", 00:34:42.010 "trsvcid": "4420", 00:34:42.010 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:42.010 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:42.010 "hdgst": false, 00:34:42.010 "ddgst": false 00:34:42.010 }, 00:34:42.010 "method": "bdev_nvme_attach_controller" 00:34:42.010 },{ 00:34:42.010 "params": { 00:34:42.010 "name": "Nvme2", 00:34:42.010 "trtype": "tcp", 00:34:42.010 "traddr": "10.0.0.2", 00:34:42.010 "adrfam": "ipv4", 00:34:42.010 "trsvcid": "4420", 00:34:42.010 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:42.010 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:42.010 "hdgst": false, 00:34:42.010 "ddgst": false 00:34:42.010 }, 00:34:42.010 "method": "bdev_nvme_attach_controller" 00:34:42.010 }' 00:34:42.010 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:42.010 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:42.010 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:42.010 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:42.010 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:42.010 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:42.010 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:42.010 
11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:42.010 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:42.010 11:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:42.010 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:42.010 ... 00:34:42.010 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:42.010 ... 00:34:42.010 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:42.010 ... 00:34:42.010 fio-3.35 00:34:42.010 Starting 24 threads 00:34:54.205 00:34:54.205 filename0: (groupid=0, jobs=1): err= 0: pid=1950487: Tue Nov 19 11:01:59 2024 00:34:54.205 read: IOPS=571, BW=2286KiB/s (2341kB/s)(22.4MiB/10018msec) 00:34:54.205 slat (nsec): min=6943, max=64196, avg=20839.67, stdev=6831.53 00:34:54.205 clat (usec): min=10416, max=30087, avg=27798.80, stdev=1226.65 00:34:54.205 lat (usec): min=10463, max=30138, avg=27819.64, stdev=1226.15 00:34:54.205 clat percentiles (usec): 00:34:54.205 | 1.00th=[20579], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:34:54.205 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:54.205 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:34:54.205 | 99.00th=[28967], 99.50th=[29492], 99.90th=[30016], 99.95th=[30016], 00:34:54.205 | 99.99th=[30016] 00:34:54.205 bw ( KiB/s): min= 2176, max= 2304, per=4.17%, avg=2282.95, stdev=47.72, samples=19 00:34:54.205 iops : min= 544, max= 576, avg=570.74, stdev=11.93, samples=19 00:34:54.205 lat (msec) : 20=0.80%, 50=99.20% 00:34:54.205 cpu : usr=98.46%, sys=1.18%, ctx=16, majf=0, minf=9 00:34:54.205 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:54.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.205 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.205 issued rwts: total=5726,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.205 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.205 filename0: (groupid=0, jobs=1): err= 0: pid=1950488: Tue Nov 19 11:01:59 2024 00:34:54.205 read: IOPS=579, BW=2317KiB/s (2373kB/s)(22.7MiB/10026msec) 00:34:54.205 slat (nsec): min=7035, max=58805, avg=16543.09, stdev=6520.53 00:34:54.205 clat (usec): min=2283, max=43004, avg=27484.04, stdev=3287.52 00:34:54.205 lat (usec): min=2298, max=43017, avg=27500.58, stdev=3287.30 00:34:54.205 clat percentiles (usec): 00:34:54.205 | 1.00th=[ 3982], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:34:54.205 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:54.205 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:34:54.205 | 99.00th=[28967], 99.50th=[29492], 99.90th=[41157], 99.95th=[41681], 00:34:54.205 | 99.99th=[43254] 00:34:54.205 bw ( KiB/s): min= 2176, max= 3072, per=4.23%, avg=2316.80, stdev=185.26, samples=20 00:34:54.205 iops : min= 544, max= 768, avg=579.20, stdev=46.31, samples=20 00:34:54.205 lat (msec) : 4=1.02%, 10=0.48%, 20=1.08%, 50=97.42% 00:34:54.205 cpu : usr=98.34%, sys=1.31%, ctx=14, majf=0, minf=9 00:34:54.205 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 
8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:54.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.205 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.205 issued rwts: total=5808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.205 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.205 filename0: (groupid=0, jobs=1): err= 0: pid=1950489: Tue Nov 19 11:01:59 2024 00:34:54.205 read: IOPS=569, BW=2277KiB/s (2331kB/s)(22.2MiB/10004msec) 00:34:54.205 slat (nsec): min=5212, max=88658, avg=32594.36, stdev=17698.55 00:34:54.205 clat (usec): min=11660, max=70981, avg=27850.58, stdev=1714.74 00:34:54.205 lat (usec): min=11667, max=70999, avg=27883.18, stdev=1713.95 00:34:54.205 clat percentiles (usec): 00:34:54.205 | 1.00th=[26870], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:34:54.205 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:54.205 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:34:54.205 | 99.00th=[29492], 99.50th=[30016], 99.90th=[50070], 99.95th=[50070], 00:34:54.205 | 99.99th=[70779] 00:34:54.205 bw ( KiB/s): min= 2052, max= 2304, per=4.15%, avg=2271.40, stdev=68.08, samples=20 00:34:54.205 iops : min= 513, max= 576, avg=567.85, stdev=17.02, samples=20 00:34:54.206 lat (msec) : 20=0.56%, 50=99.35%, 100=0.09% 00:34:54.206 cpu : usr=98.45%, sys=1.19%, ctx=11, majf=0, minf=9 00:34:54.206 IO depths : 1=3.9%, 2=10.2%, 4=25.0%, 8=52.4%, 16=8.6%, 32=0.0%, >=64=0.0% 00:34:54.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.206 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.206 issued rwts: total=5694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.206 filename0: (groupid=0, jobs=1): err= 0: pid=1950490: Tue Nov 19 11:01:59 2024 00:34:54.206 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10005msec) 00:34:54.206 slat (usec): min=7, max=101, avg=38.89, stdev=22.65 00:34:54.206 clat (usec): min=12526, max=47988, avg=27704.36, stdev=1407.13 00:34:54.206 lat (usec): min=12541, max=48006, avg=27743.25, stdev=1407.10 00:34:54.206 clat percentiles (usec): 00:34:54.206 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27395], 20.00th=[27395], 00:34:54.206 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:34:54.206 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:34:54.206 | 99.00th=[28705], 99.50th=[29492], 99.90th=[47973], 99.95th=[47973], 00:34:54.206 | 99.99th=[47973] 00:34:54.206 bw ( KiB/s): min= 2048, max= 2304, per=4.15%, avg=2272.00, stdev=70.42, samples=20 00:34:54.206 iops : min= 512, max= 576, avg=568.00, stdev=17.60, samples=20 00:34:54.206 lat (msec) : 20=0.28%, 50=99.72% 00:34:54.206 cpu : usr=98.65%, sys=1.01%, ctx=11, majf=0, minf=9 00:34:54.206 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:54.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.206 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.206 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.206 filename0: (groupid=0, jobs=1): err= 0: pid=1950491: Tue Nov 19 11:01:59 2024 00:34:54.206 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10004msec) 00:34:54.206 slat (usec): min=7, max=101, avg=33.31, stdev=22.47 00:34:54.206 clat 
(usec): min=20964, max=35500, avg=27854.37, stdev=548.82 00:34:54.206 lat (usec): min=21019, max=35516, avg=27887.69, stdev=541.72 00:34:54.206 clat percentiles (usec): 00:34:54.206 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:34:54.206 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:54.206 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:34:54.206 | 99.00th=[28967], 99.50th=[29492], 99.90th=[32375], 99.95th=[35390], 00:34:54.206 | 99.99th=[35390] 00:34:54.206 bw ( KiB/s): min= 2176, max= 2304, per=4.14%, avg=2270.32, stdev=57.91, samples=19 00:34:54.206 iops : min= 544, max= 576, avg=567.58, stdev=14.48, samples=19 00:34:54.206 lat (msec) : 50=100.00% 00:34:54.206 cpu : usr=98.44%, sys=1.20%, ctx=13, majf=0, minf=9 00:34:54.206 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:54.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.206 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.206 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.206 filename0: (groupid=0, jobs=1): err= 0: pid=1950492: Tue Nov 19 11:01:59 2024 00:34:54.206 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10005msec) 00:34:54.206 slat (usec): min=4, max=102, avg=39.82, stdev=22.47 00:34:54.206 clat (usec): min=12485, max=48120, avg=27703.90, stdev=1446.48 00:34:54.206 lat (usec): min=12499, max=48133, avg=27743.73, stdev=1446.14 00:34:54.206 clat percentiles (usec): 00:34:54.206 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27395], 20.00th=[27395], 00:34:54.206 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:34:54.206 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:34:54.206 | 99.00th=[29230], 99.50th=[29492], 99.90th=[47973], 99.95th=[47973], 00:34:54.206 | 99.99th=[47973] 00:34:54.206 bw ( KiB/s): min= 2048, max= 2304, per=4.15%, avg=2272.00, stdev=70.42, samples=20 00:34:54.206 iops : min= 512, max= 576, avg=568.00, stdev=17.60, samples=20 00:34:54.206 lat (msec) : 20=0.28%, 50=99.72% 00:34:54.206 cpu : usr=98.74%, sys=0.92%, ctx=11, majf=0, minf=9 00:34:54.206 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:54.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.206 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.206 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.206 filename0: (groupid=0, jobs=1): err= 0: pid=1950493: Tue Nov 19 11:01:59 2024 00:34:54.206 read: IOPS=569, BW=2278KiB/s (2332kB/s)(22.2MiB/10003msec) 00:34:54.206 slat (usec): min=6, max=101, avg=37.85, stdev=22.95 00:34:54.206 clat (usec): min=21104, max=34244, avg=27818.02, stdev=551.41 00:34:54.206 lat (usec): min=21171, max=34260, avg=27855.86, stdev=545.97 00:34:54.206 clat percentiles (usec): 00:34:54.206 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:34:54.206 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:34:54.206 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:34:54.206 | 99.00th=[28967], 99.50th=[29492], 99.90th=[34341], 99.95th=[34341], 00:34:54.206 | 99.99th=[34341] 00:34:54.206 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2277.05, stdev=53.61, samples=19 
00:34:54.206 iops : min= 544, max= 576, avg=569.26, stdev=13.40, samples=19 00:34:54.206 lat (msec) : 50=100.00% 00:34:54.206 cpu : usr=98.56%, sys=1.08%, ctx=14, majf=0, minf=9 00:34:54.206 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:54.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.206 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.206 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.206 filename0: (groupid=0, jobs=1): err= 0: pid=1950494: Tue Nov 19 11:01:59 2024 00:34:54.206 read: IOPS=572, BW=2289KiB/s (2344kB/s)(22.4MiB/10011msec) 00:34:54.206 slat (nsec): min=7782, max=64874, avg=20774.37, stdev=6155.74 00:34:54.206 clat (usec): min=11749, max=30081, avg=27774.99, stdev=1443.50 00:34:54.206 lat (usec): min=11770, max=30095, avg=27795.76, stdev=1443.24 00:34:54.206 clat percentiles (usec): 00:34:54.206 | 1.00th=[18220], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:34:54.206 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:54.206 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:34:54.206 | 99.00th=[28967], 99.50th=[29492], 99.90th=[30016], 99.95th=[30016], 00:34:54.206 | 99.99th=[30016] 00:34:54.206 bw ( KiB/s): min= 2176, max= 2432, per=4.17%, avg=2284.80, stdev=62.64, samples=20 00:34:54.206 iops : min= 544, max= 608, avg=571.20, stdev=15.66, samples=20 00:34:54.206 lat (msec) : 20=1.12%, 50=98.88% 00:34:54.206 cpu : usr=98.46%, sys=1.18%, ctx=10, majf=0, minf=9 00:34:54.206 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:54.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.206 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.206 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.206 filename1: (groupid=0, jobs=1): err= 0: pid=1950495: Tue Nov 19 11:01:59 2024 00:34:54.206 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10005msec) 00:34:54.206 slat (usec): min=8, max=105, avg=40.70, stdev=22.29 00:34:54.206 clat (usec): min=11667, max=47939, avg=27720.99, stdev=1423.27 00:34:54.206 lat (usec): min=11675, max=47952, avg=27761.69, stdev=1422.28 00:34:54.206 clat percentiles (usec): 00:34:54.206 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27395], 20.00th=[27395], 00:34:54.206 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:34:54.206 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:34:54.206 | 99.00th=[28967], 99.50th=[29492], 99.90th=[47973], 99.95th=[47973], 00:34:54.206 | 99.99th=[47973] 00:34:54.206 bw ( KiB/s): min= 2048, max= 2304, per=4.15%, avg=2272.00, stdev=70.42, samples=20 00:34:54.206 iops : min= 512, max= 576, avg=568.00, stdev=17.60, samples=20 00:34:54.206 lat (msec) : 20=0.28%, 50=99.72% 00:34:54.206 cpu : usr=98.60%, sys=1.05%, ctx=16, majf=0, minf=9 00:34:54.206 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:54.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.206 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.206 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.206 filename1: 
(groupid=0, jobs=1): err= 0: pid=1950496: Tue Nov 19 11:01:59 2024 00:34:54.206 read: IOPS=572, BW=2290KiB/s (2345kB/s)(22.4MiB/10006msec) 00:34:54.206 slat (usec): min=6, max=100, avg=15.83, stdev=14.63 00:34:54.206 clat (usec): min=9490, max=32948, avg=27828.89, stdev=1555.88 00:34:54.206 lat (usec): min=9517, max=32957, avg=27844.72, stdev=1554.48 00:34:54.206 clat percentiles (usec): 00:34:54.206 | 1.00th=[17957], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:34:54.206 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:54.206 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:34:54.206 | 99.00th=[28967], 99.50th=[29230], 99.90th=[32900], 99.95th=[32900], 00:34:54.206 | 99.99th=[32900] 00:34:54.206 bw ( KiB/s): min= 2176, max= 2432, per=4.17%, avg=2284.80, stdev=62.64, samples=20 00:34:54.206 iops : min= 544, max= 608, avg=571.20, stdev=15.66, samples=20 00:34:54.206 lat (msec) : 10=0.28%, 20=0.84%, 50=98.88% 00:34:54.206 cpu : usr=98.56%, sys=1.08%, ctx=14, majf=0, minf=9 00:34:54.206 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:54.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.206 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.206 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.206 filename1: (groupid=0, jobs=1): err= 0: pid=1950497: Tue Nov 19 11:01:59 2024 00:34:54.207 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10005msec) 00:34:54.207 slat (usec): min=4, max=102, avg=37.72, stdev=22.64 00:34:54.207 clat (usec): min=12557, max=47713, avg=27711.90, stdev=1395.61 00:34:54.207 lat (usec): min=12571, max=47728, avg=27749.61, stdev=1395.56 00:34:54.207 clat percentiles (usec): 00:34:54.207 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27395], 20.00th=[27395], 00:34:54.207 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:34:54.207 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:34:54.207 | 99.00th=[28705], 99.50th=[29492], 99.90th=[47449], 99.95th=[47449], 00:34:54.207 | 99.99th=[47973] 00:34:54.207 bw ( KiB/s): min= 2052, max= 2304, per=4.15%, avg=2272.20, stdev=69.75, samples=20 00:34:54.207 iops : min= 513, max= 576, avg=568.05, stdev=17.44, samples=20 00:34:54.207 lat (msec) : 20=0.28%, 50=99.72% 00:34:54.207 cpu : usr=98.61%, sys=1.03%, ctx=13, majf=0, minf=9 00:34:54.207 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:54.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.207 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.207 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.207 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.207 filename1: (groupid=0, jobs=1): err= 0: pid=1950498: Tue Nov 19 11:01:59 2024 00:34:54.207 read: IOPS=572, BW=2289KiB/s (2344kB/s)(22.4MiB/10011msec) 00:34:54.207 slat (nsec): min=6993, max=58248, avg=20321.09, stdev=5832.31 00:34:54.207 clat (usec): min=11771, max=30032, avg=27784.26, stdev=1440.07 00:34:54.207 lat (usec): min=11778, max=30048, avg=27804.58, stdev=1439.87 00:34:54.207 clat percentiles (usec): 00:34:54.207 | 1.00th=[18220], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:34:54.207 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:54.207 | 70.00th=[27919], 80.00th=[27919], 
90.00th=[28181], 95.00th=[28181], 00:34:54.207 | 99.00th=[28967], 99.50th=[29492], 99.90th=[30016], 99.95th=[30016], 00:34:54.207 | 99.99th=[30016] 00:34:54.207 bw ( KiB/s): min= 2176, max= 2432, per=4.17%, avg=2284.80, stdev=62.64, samples=20 00:34:54.207 iops : min= 544, max= 608, avg=571.20, stdev=15.66, samples=20 00:34:54.207 lat (msec) : 20=1.12%, 50=98.88% 00:34:54.207 cpu : usr=98.56%, sys=1.08%, ctx=8, majf=0, minf=9 00:34:54.207 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:54.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.207 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.207 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.207 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.207 filename1: (groupid=0, jobs=1): err= 0: pid=1950499: Tue Nov 19 11:01:59 2024 00:34:54.207 read: IOPS=568, BW=2276KiB/s (2331kB/s)(22.2MiB/10011msec) 00:34:54.207 slat (usec): min=4, max=104, avg=41.01, stdev=22.33 00:34:54.207 clat (usec): min=12240, max=54849, avg=27766.06, stdev=1705.09 00:34:54.207 lat (usec): min=12287, max=54862, avg=27807.07, stdev=1703.15 00:34:54.207 clat percentiles (usec): 00:34:54.207 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27395], 20.00th=[27395], 00:34:54.207 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:34:54.207 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:34:54.207 | 99.00th=[28705], 99.50th=[29492], 99.90th=[54789], 99.95th=[54789], 00:34:54.207 | 99.99th=[54789] 00:34:54.207 bw ( KiB/s): min= 2048, max= 2304, per=4.14%, avg=2270.32, stdev=71.93, samples=19 00:34:54.207 iops : min= 512, max= 576, avg=567.58, stdev=17.98, samples=19 00:34:54.207 lat (msec) : 20=0.28%, 50=99.44%, 100=0.28% 00:34:54.207 cpu : usr=98.39%, sys=1.26%, ctx=14, majf=0, minf=9 00:34:54.207 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:54.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.207 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.207 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.207 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.207 filename1: (groupid=0, jobs=1): err= 0: pid=1950500: Tue Nov 19 11:01:59 2024 00:34:54.207 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10005msec) 00:34:54.207 slat (usec): min=7, max=107, avg=36.97, stdev=23.32 00:34:54.207 clat (usec): min=20778, max=32863, avg=27839.69, stdev=638.39 00:34:54.207 lat (usec): min=20822, max=32898, avg=27876.66, stdev=633.05 00:34:54.207 clat percentiles (usec): 00:34:54.207 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:34:54.207 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:54.207 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:34:54.207 | 99.00th=[29230], 99.50th=[32375], 99.90th=[32637], 99.95th=[32900], 00:34:54.207 | 99.99th=[32900] 00:34:54.207 bw ( KiB/s): min= 2176, max= 2304, per=4.14%, avg=2270.32, stdev=54.88, samples=19 00:34:54.207 iops : min= 544, max= 576, avg=567.58, stdev=13.72, samples=19 00:34:54.207 lat (msec) : 50=100.00% 00:34:54.207 cpu : usr=98.40%, sys=1.24%, ctx=11, majf=0, minf=9 00:34:54.207 IO depths : 1=4.0%, 2=10.2%, 4=24.9%, 8=52.4%, 16=8.5%, 32=0.0%, >=64=0.0% 00:34:54.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.207 complete 
: 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.207 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.207 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.207 filename1: (groupid=0, jobs=1): err= 0: pid=1950501: Tue Nov 19 11:01:59 2024 00:34:54.207 read: IOPS=573, BW=2292KiB/s (2347kB/s)(22.4MiB/10024msec) 00:34:54.207 slat (nsec): min=7005, max=56145, avg=15777.30, stdev=5262.40 00:34:54.207 clat (usec): min=9546, max=36125, avg=27784.76, stdev=1608.96 00:34:54.207 lat (usec): min=9593, max=36138, avg=27800.54, stdev=1608.54 00:34:54.207 clat percentiles (usec): 00:34:54.207 | 1.00th=[17695], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:34:54.207 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:54.207 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:34:54.207 | 99.00th=[28705], 99.50th=[29492], 99.90th=[29754], 99.95th=[29754], 00:34:54.207 | 99.99th=[35914] 00:34:54.207 bw ( KiB/s): min= 2176, max= 2432, per=4.18%, avg=2291.20, stdev=57.24, samples=20 00:34:54.207 iops : min= 544, max= 608, avg=572.80, stdev=14.31, samples=20 00:34:54.207 lat (msec) : 10=0.28%, 20=1.15%, 50=98.57% 00:34:54.207 cpu : usr=98.65%, sys=1.00%, ctx=14, majf=0, minf=9 00:34:54.207 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:54.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.207 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.207 issued rwts: total=5744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.207 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.207 filename1: (groupid=0, jobs=1): err= 0: pid=1950502: Tue Nov 19 11:01:59 2024 00:34:54.207 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10005msec) 00:34:54.207 slat (usec): min=4, max=101, avg=38.93, stdev=22.32 00:34:54.207 clat (usec): min=12427, max=48040, avg=27710.75, stdev=1416.97 00:34:54.207 lat (usec): min=12441, max=48053, avg=27749.68, stdev=1416.67 00:34:54.207 clat percentiles (usec): 00:34:54.207 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27395], 20.00th=[27395], 00:34:54.207 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:34:54.207 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:34:54.207 | 99.00th=[28705], 99.50th=[29492], 99.90th=[47973], 99.95th=[47973], 00:34:54.207 | 99.99th=[47973] 00:34:54.207 bw ( KiB/s): min= 2048, max= 2304, per=4.15%, avg=2272.00, stdev=70.42, samples=20 00:34:54.207 iops : min= 512, max= 576, avg=568.00, stdev=17.60, samples=20 00:34:54.207 lat (msec) : 20=0.28%, 50=99.72% 00:34:54.207 cpu : usr=98.42%, sys=1.23%, ctx=15, majf=0, minf=9 00:34:54.207 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:54.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.207 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.207 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.207 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.207 filename2: (groupid=0, jobs=1): err= 0: pid=1950503: Tue Nov 19 11:01:59 2024 00:34:54.207 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10005msec) 00:34:54.207 slat (nsec): min=7564, max=81303, avg=33283.76, stdev=13274.24 00:34:54.207 clat (usec): min=12712, max=47939, avg=27809.30, stdev=1390.58 00:34:54.207 lat (usec): min=12754, max=47952, avg=27842.58, stdev=1389.64 
00:34:54.207 clat percentiles (usec): 00:34:54.207 | 1.00th=[27395], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:34:54.207 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:34:54.207 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:34:54.207 | 99.00th=[28967], 99.50th=[29492], 99.90th=[47973], 99.95th=[47973], 00:34:54.207 | 99.99th=[47973] 00:34:54.207 bw ( KiB/s): min= 2048, max= 2304, per=4.15%, avg=2272.00, stdev=70.42, samples=20 00:34:54.207 iops : min= 512, max= 576, avg=568.00, stdev=17.60, samples=20 00:34:54.207 lat (msec) : 20=0.28%, 50=99.72% 00:34:54.207 cpu : usr=98.38%, sys=1.25%, ctx=76, majf=0, minf=9 00:34:54.207 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:54.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.207 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.207 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.207 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.207 filename2: (groupid=0, jobs=1): err= 0: pid=1950504: Tue Nov 19 11:01:59 2024 00:34:54.207 read: IOPS=568, BW=2276KiB/s (2331kB/s)(22.2MiB/10011msec) 00:34:54.207 slat (nsec): min=4621, max=35129, avg=15553.96, stdev=5401.46 00:34:54.207 clat (usec): min=14710, max=54398, avg=27965.26, stdev=1238.15 00:34:54.207 lat (usec): min=14718, max=54411, avg=27980.81, stdev=1238.15 00:34:54.207 clat percentiles (usec): 00:34:54.207 | 1.00th=[27657], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:34:54.207 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:54.207 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:34:54.207 | 99.00th=[28967], 99.50th=[30278], 99.90th=[44303], 99.95th=[44303], 00:34:54.207 | 99.99th=[54264] 00:34:54.208 bw ( KiB/s): min= 2176, max= 2304, per=4.14%, avg=2270.32, stdev=57.91, samples=19 00:34:54.208 iops : min= 544, max= 576, avg=567.58, stdev=14.48, samples=19 00:34:54.208 lat (msec) : 20=0.35%, 50=99.61%, 100=0.04% 00:34:54.208 cpu : usr=98.71%, sys=0.94%, ctx=10, majf=0, minf=9 00:34:54.208 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:54.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.208 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.208 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.208 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.208 filename2: (groupid=0, jobs=1): err= 0: pid=1950505: Tue Nov 19 11:01:59 2024 00:34:54.208 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10004msec) 00:34:54.208 slat (nsec): min=5598, max=98428, avg=29658.06, stdev=19827.32 00:34:54.208 clat (usec): min=20581, max=35541, avg=27874.93, stdev=557.70 00:34:54.208 lat (usec): min=20603, max=35557, avg=27904.59, stdev=551.96 00:34:54.208 clat percentiles (usec): 00:34:54.208 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:34:54.208 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:54.208 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28181], 00:34:54.208 | 99.00th=[28967], 99.50th=[29492], 99.90th=[32375], 99.95th=[32637], 00:34:54.208 | 99.99th=[35390] 00:34:54.208 bw ( KiB/s): min= 2176, max= 2304, per=4.14%, avg=2270.32, stdev=57.91, samples=19 00:34:54.208 iops : min= 544, max= 576, avg=567.58, stdev=14.48, samples=19 00:34:54.208 lat (msec) 
: 50=100.00% 00:34:54.208 cpu : usr=98.33%, sys=1.28%, ctx=15, majf=0, minf=9 00:34:54.208 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:54.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.208 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.208 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.208 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.208 filename2: (groupid=0, jobs=1): err= 0: pid=1950506: Tue Nov 19 11:01:59 2024 00:34:54.208 read: IOPS=572, BW=2289KiB/s (2344kB/s)(22.4MiB/10011msec) 00:34:54.208 slat (nsec): min=7623, max=56695, avg=20172.69, stdev=5821.77 00:34:54.208 clat (usec): min=11761, max=30009, avg=27785.72, stdev=1446.70 00:34:54.208 lat (usec): min=11773, max=30021, avg=27805.89, stdev=1446.30 00:34:54.208 clat percentiles (usec): 00:34:54.208 | 1.00th=[18220], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:34:54.208 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:54.208 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:34:54.208 | 99.00th=[28967], 99.50th=[29492], 99.90th=[30016], 99.95th=[30016], 00:34:54.208 | 99.99th=[30016] 00:34:54.208 bw ( KiB/s): min= 2176, max= 2432, per=4.17%, avg=2284.80, stdev=62.64, samples=20 00:34:54.208 iops : min= 544, max= 608, avg=571.20, stdev=15.66, samples=20 00:34:54.208 lat (msec) : 20=1.12%, 50=98.88% 00:34:54.208 cpu : usr=98.56%, sys=1.08%, ctx=14, majf=0, minf=9 00:34:54.208 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:54.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.208 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.208 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.208 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.208 filename2: (groupid=0, jobs=1): err= 0: pid=1950507: Tue Nov 19 11:01:59 2024 00:34:54.208 read: IOPS=572, BW=2289KiB/s (2344kB/s)(22.4MiB/10010msec) 00:34:54.208 slat (nsec): min=7033, max=77586, avg=15650.33, stdev=6669.14 00:34:54.208 clat (usec): min=11719, max=29900, avg=27832.45, stdev=1453.29 00:34:54.208 lat (usec): min=11737, max=29913, avg=27848.10, stdev=1452.15 00:34:54.208 clat percentiles (usec): 00:34:54.208 | 1.00th=[18482], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:34:54.208 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:54.208 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:34:54.208 | 99.00th=[28967], 99.50th=[29754], 99.90th=[29754], 99.95th=[29754], 00:34:54.208 | 99.99th=[30016] 00:34:54.208 bw ( KiB/s): min= 2176, max= 2432, per=4.17%, avg=2284.80, stdev=62.64, samples=20 00:34:54.208 iops : min= 544, max= 608, avg=571.20, stdev=15.66, samples=20 00:34:54.208 lat (msec) : 20=1.12%, 50=98.88% 00:34:54.208 cpu : usr=98.52%, sys=1.13%, ctx=14, majf=0, minf=9 00:34:54.208 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:54.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.208 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.208 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.208 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.208 filename2: (groupid=0, jobs=1): err= 0: pid=1950508: Tue Nov 19 11:01:59 2024 00:34:54.208 read: IOPS=571, 
BW=2287KiB/s (2342kB/s)(22.4MiB/10012msec) 00:34:54.208 slat (nsec): min=6761, max=85593, avg=17887.09, stdev=6862.68 00:34:54.208 clat (usec): min=12686, max=46809, avg=27844.80, stdev=1887.61 00:34:54.208 lat (usec): min=12736, max=46837, avg=27862.69, stdev=1887.51 00:34:54.208 clat percentiles (usec): 00:34:54.208 | 1.00th=[19268], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:34:54.208 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:54.208 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:34:54.208 | 99.00th=[32900], 99.50th=[36439], 99.90th=[46400], 99.95th=[46924], 00:34:54.208 | 99.99th=[46924] 00:34:54.208 bw ( KiB/s): min= 2176, max= 2400, per=4.17%, avg=2282.11, stdev=58.72, samples=19 00:34:54.208 iops : min= 544, max= 600, avg=570.53, stdev=14.68, samples=19 00:34:54.208 lat (msec) : 20=1.66%, 50=98.34% 00:34:54.208 cpu : usr=98.36%, sys=1.29%, ctx=9, majf=0, minf=9 00:34:54.208 IO depths : 1=3.1%, 2=8.7%, 4=23.9%, 8=54.9%, 16=9.5%, 32=0.0%, >=64=0.0% 00:34:54.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.208 complete : 0=0.0%, 4=94.1%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.208 issued rwts: total=5724,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.208 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.208 filename2: (groupid=0, jobs=1): err= 0: pid=1950509: Tue Nov 19 11:01:59 2024 00:34:54.208 read: IOPS=581, BW=2325KiB/s (2381kB/s)(22.8MiB/10018msec) 00:34:54.208 slat (nsec): min=6933, max=51586, avg=15475.05, stdev=5393.38 00:34:54.208 clat (usec): min=2150, max=37987, avg=27389.06, stdev=3533.61 00:34:54.208 lat (usec): min=2186, max=38002, avg=27404.53, stdev=3533.25 00:34:54.208 clat percentiles (usec): 00:34:54.208 | 1.00th=[ 2933], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:34:54.208 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:54.208 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:34:54.208 | 99.00th=[28705], 99.50th=[29230], 99.90th=[36439], 99.95th=[36439], 00:34:54.208 | 99.99th=[38011] 00:34:54.208 bw ( KiB/s): min= 2176, max= 3200, per=4.24%, avg=2323.20, stdev=212.87, samples=20 00:34:54.208 iops : min= 544, max= 800, avg=580.80, stdev=53.22, samples=20 00:34:54.208 lat (msec) : 4=1.37%, 10=0.48%, 20=1.34%, 50=96.81% 00:34:54.208 cpu : usr=98.43%, sys=1.20%, ctx=21, majf=0, minf=9 00:34:54.208 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:54.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.208 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.208 issued rwts: total=5824,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.208 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.208 filename2: (groupid=0, jobs=1): err= 0: pid=1950510: Tue Nov 19 11:01:59 2024 00:34:54.208 read: IOPS=575, BW=2304KiB/s (2359kB/s)(22.5MiB/10004msec) 00:34:54.208 slat (usec): min=6, max=102, avg=32.96, stdev=22.99 00:34:54.208 clat (usec): min=7303, max=49972, avg=27525.60, stdev=2738.16 00:34:54.208 lat (usec): min=7310, max=49984, avg=27558.56, stdev=2738.97 00:34:54.208 clat percentiles (usec): 00:34:54.208 | 1.00th=[16581], 5.00th=[23200], 10.00th=[27132], 20.00th=[27395], 00:34:54.208 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:34:54.208 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:34:54.208 | 99.00th=[36439], 
99.50th=[39584], 99.90th=[50070], 99.95th=[50070], 00:34:54.208 | 99.99th=[50070] 00:34:54.208 bw ( KiB/s): min= 2052, max= 2432, per=4.20%, avg=2298.60, stdev=78.11, samples=20 00:34:54.208 iops : min= 513, max= 608, avg=574.65, stdev=19.53, samples=20 00:34:54.208 lat (msec) : 10=0.10%, 20=2.19%, 50=97.71% 00:34:54.208 cpu : usr=98.62%, sys=1.01%, ctx=14, majf=0, minf=9 00:34:54.208 IO depths : 1=2.5%, 2=5.7%, 4=13.5%, 8=65.8%, 16=12.4%, 32=0.0%, >=64=0.0% 00:34:54.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.208 complete : 0=0.0%, 4=91.7%, 8=5.0%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.208 issued rwts: total=5762,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.208 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.208 00:34:54.208 Run status group 0 (all jobs): 00:34:54.208 READ: bw=53.5MiB/s (56.1MB/s), 2276KiB/s-2325KiB/s (2331kB/s-2381kB/s), io=536MiB (562MB), run=10003-10026msec 00:34:54.208 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:34:54.208 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:54.208 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:54.208 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:54.208 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:54.208 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:54.208 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.208 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:54.208 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.208 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:54.208 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.208 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:54.208 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.208 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@46 -- # destroy_subsystem 2 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:54.209 bdev_null0 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:54.209 11:02:00 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:54.209 [2024-11-19 11:02:00.123380] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:54.209 bdev_null1 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params 
-- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:54.209 { 00:34:54.209 "params": { 00:34:54.209 "name": "Nvme$subsystem", 00:34:54.209 "trtype": "$TEST_TRANSPORT", 00:34:54.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:54.209 "adrfam": "ipv4", 00:34:54.209 "trsvcid": "$NVMF_PORT", 00:34:54.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:54.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:54.209 "hdgst": ${hdgst:-false}, 00:34:54.209 "ddgst": ${ddgst:-false} 00:34:54.209 }, 00:34:54.209 "method": "bdev_nvme_attach_controller" 00:34:54.209 } 00:34:54.209 EOF 00:34:54.209 )") 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:54.209 { 00:34:54.209 "params": { 00:34:54.209 "name": "Nvme$subsystem", 00:34:54.209 "trtype": "$TEST_TRANSPORT", 00:34:54.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:54.209 "adrfam": "ipv4", 00:34:54.209 "trsvcid": "$NVMF_PORT", 00:34:54.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:54.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:54.209 "hdgst": ${hdgst:-false}, 00:34:54.209 "ddgst": ${ddgst:-false} 00:34:54.209 }, 00:34:54.209 "method": "bdev_nvme_attach_controller" 00:34:54.209 } 00:34:54.209 EOF 00:34:54.209 )") 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@584 -- # jq . 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:54.209 11:02:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:54.209 "params": { 00:34:54.210 "name": "Nvme0", 00:34:54.210 "trtype": "tcp", 00:34:54.210 "traddr": "10.0.0.2", 00:34:54.210 "adrfam": "ipv4", 00:34:54.210 "trsvcid": "4420", 00:34:54.210 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:54.210 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:54.210 "hdgst": false, 00:34:54.210 "ddgst": false 00:34:54.210 }, 00:34:54.210 "method": "bdev_nvme_attach_controller" 00:34:54.210 },{ 00:34:54.210 "params": { 00:34:54.210 "name": "Nvme1", 00:34:54.210 "trtype": "tcp", 00:34:54.210 "traddr": "10.0.0.2", 00:34:54.210 "adrfam": "ipv4", 00:34:54.210 "trsvcid": "4420", 00:34:54.210 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:54.210 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:54.210 "hdgst": false, 00:34:54.210 "ddgst": false 00:34:54.210 }, 00:34:54.210 "method": "bdev_nvme_attach_controller" 00:34:54.210 }' 00:34:54.210 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:54.210 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:54.210 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:54.210 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:54.210 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:54.210 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:54.210 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:54.210 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:54.210 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:54.210 11:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:54.210 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:54.210 ... 00:34:54.210 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:54.210 ... 
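The JSON document printed above is what gen_nvmf_target_json hands to fio over /dev/fd/62: one bdev_nvme_attach_controller parameter block per target subsystem, which the spdk_bdev ioengine replays before the jobs start. A minimal sketch of the same launch pattern, assuming the plugin path shown in this log and using placeholder subsystems.json/jobs.fio files in place of the /dev/fd/62 and /dev/fd/61 descriptors that dif.sh wires up:

    # Sketch only: placeholder input files; plugin path taken from the log above.
    PLUGIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
    LD_PRELOAD="$PLUGIN" /usr/src/fio/fio --ioengine=spdk_bdev \
        --spdk_json_conf=subsystems.json jobs.fio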
00:34:54.210 fio-3.35 00:34:54.210 Starting 4 threads 00:34:59.487 00:34:59.487 filename0: (groupid=0, jobs=1): err= 0: pid=1952456: Tue Nov 19 11:02:06 2024 00:34:59.487 read: IOPS=2830, BW=22.1MiB/s (23.2MB/s)(111MiB/5002msec) 00:34:59.487 slat (nsec): min=6126, max=40376, avg=9403.77, stdev=3394.48 00:34:59.487 clat (usec): min=910, max=5638, avg=2794.82, stdev=409.46 00:34:59.487 lat (usec): min=921, max=5649, avg=2804.22, stdev=409.72 00:34:59.488 clat percentiles (usec): 00:34:59.488 | 1.00th=[ 1827], 5.00th=[ 2180], 10.00th=[ 2311], 20.00th=[ 2474], 00:34:59.488 | 30.00th=[ 2573], 40.00th=[ 2704], 50.00th=[ 2802], 60.00th=[ 2900], 00:34:59.488 | 70.00th=[ 2999], 80.00th=[ 3097], 90.00th=[ 3195], 95.00th=[ 3392], 00:34:59.488 | 99.00th=[ 4047], 99.50th=[ 4293], 99.90th=[ 4817], 99.95th=[ 4883], 00:34:59.488 | 99.99th=[ 5604] 00:34:59.488 bw ( KiB/s): min=21280, max=24448, per=26.86%, avg=22615.11, stdev=1039.05, samples=9 00:34:59.488 iops : min= 2660, max= 3056, avg=2826.89, stdev=129.88, samples=9 00:34:59.488 lat (usec) : 1000=0.01% 00:34:59.488 lat (msec) : 2=2.07%, 4=96.75%, 10=1.17% 00:34:59.488 cpu : usr=96.42%, sys=3.22%, ctx=30, majf=0, minf=9 00:34:59.488 IO depths : 1=0.3%, 2=12.8%, 4=59.0%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:59.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.488 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.488 issued rwts: total=14160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:59.488 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:59.488 filename0: (groupid=0, jobs=1): err= 0: pid=1952457: Tue Nov 19 11:02:06 2024 00:34:59.488 read: IOPS=2653, BW=20.7MiB/s (21.7MB/s)(104MiB/5001msec) 00:34:59.488 slat (nsec): min=6131, max=36501, avg=10228.67, stdev=4373.57 00:34:59.488 clat (usec): min=589, max=5718, avg=2982.42, stdev=462.70 00:34:59.488 lat (usec): min=597, max=5735, avg=2992.65, stdev=462.82 00:34:59.488 clat percentiles (usec): 00:34:59.488 | 1.00th=[ 2040], 5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2638], 00:34:59.488 | 30.00th=[ 2769], 40.00th=[ 2900], 50.00th=[ 2966], 60.00th=[ 3064], 00:34:59.488 | 70.00th=[ 3097], 80.00th=[ 3195], 90.00th=[ 3490], 95.00th=[ 3785], 00:34:59.488 | 99.00th=[ 4555], 99.50th=[ 4883], 99.90th=[ 5342], 99.95th=[ 5473], 00:34:59.488 | 99.99th=[ 5735] 00:34:59.488 bw ( KiB/s): min=20384, max=22464, per=25.05%, avg=21096.22, stdev=623.58, samples=9 00:34:59.488 iops : min= 2548, max= 2808, avg=2637.00, stdev=77.94, samples=9 00:34:59.488 lat (usec) : 750=0.02%, 1000=0.02% 00:34:59.488 lat (msec) : 2=0.82%, 4=95.89%, 10=3.26% 00:34:59.488 cpu : usr=94.48%, sys=3.86%, ctx=228, majf=0, minf=9 00:34:59.488 IO depths : 1=0.1%, 2=8.6%, 4=62.9%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:59.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.488 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.488 issued rwts: total=13272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:59.488 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:59.488 filename1: (groupid=0, jobs=1): err= 0: pid=1952458: Tue Nov 19 11:02:06 2024 00:34:59.488 read: IOPS=2480, BW=19.4MiB/s (20.3MB/s)(96.9MiB/5002msec) 00:34:59.488 slat (nsec): min=6151, max=36223, avg=9522.25, stdev=3660.15 00:34:59.488 clat (usec): min=591, max=5816, avg=3197.00, stdev=526.28 00:34:59.488 lat (usec): min=604, max=5823, avg=3206.52, stdev=525.72 00:34:59.488 clat percentiles (usec): 00:34:59.488 | 1.00th=[ 2114], 
5.00th=[ 2507], 10.00th=[ 2704], 20.00th=[ 2868], 00:34:59.488 | 30.00th=[ 2966], 40.00th=[ 3032], 50.00th=[ 3097], 60.00th=[ 3163], 00:34:59.488 | 70.00th=[ 3261], 80.00th=[ 3490], 90.00th=[ 3884], 95.00th=[ 4228], 00:34:59.488 | 99.00th=[ 5014], 99.50th=[ 5145], 99.90th=[ 5538], 99.95th=[ 5735], 00:34:59.488 | 99.99th=[ 5800] 00:34:59.488 bw ( KiB/s): min=18768, max=21136, per=23.65%, avg=19911.11, stdev=731.78, samples=9 00:34:59.488 iops : min= 2346, max= 2642, avg=2488.89, stdev=91.47, samples=9 00:34:59.488 lat (usec) : 750=0.01%, 1000=0.01% 00:34:59.488 lat (msec) : 2=0.64%, 4=91.14%, 10=8.21% 00:34:59.488 cpu : usr=96.44%, sys=3.24%, ctx=12, majf=0, minf=9 00:34:59.488 IO depths : 1=0.3%, 2=3.3%, 4=69.5%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:59.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.488 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.488 issued rwts: total=12406,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:59.488 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:59.488 filename1: (groupid=0, jobs=1): err= 0: pid=1952459: Tue Nov 19 11:02:06 2024 00:34:59.488 read: IOPS=2562, BW=20.0MiB/s (21.0MB/s)(100MiB/5001msec) 00:34:59.488 slat (nsec): min=6126, max=46252, avg=10118.82, stdev=4329.56 00:34:59.488 clat (usec): min=636, max=5612, avg=3091.23, stdev=486.56 00:34:59.488 lat (usec): min=645, max=5625, avg=3101.35, stdev=486.18 00:34:59.488 clat percentiles (usec): 00:34:59.488 | 1.00th=[ 2073], 5.00th=[ 2409], 10.00th=[ 2573], 20.00th=[ 2769], 00:34:59.488 | 30.00th=[ 2900], 40.00th=[ 2966], 50.00th=[ 3032], 60.00th=[ 3097], 00:34:59.488 | 70.00th=[ 3163], 80.00th=[ 3326], 90.00th=[ 3720], 95.00th=[ 4080], 00:34:59.488 | 99.00th=[ 4621], 99.50th=[ 4948], 99.90th=[ 5342], 99.95th=[ 5473], 00:34:59.488 | 99.99th=[ 5604] 00:34:59.488 bw ( KiB/s): min=19543, max=21552, per=24.52%, avg=20649.67, stdev=693.55, samples=9 00:34:59.488 iops : min= 2442, max= 2694, avg=2581.11, stdev=86.87, samples=9 00:34:59.488 lat (usec) : 750=0.01% 00:34:59.488 lat (msec) : 2=0.69%, 4=93.47%, 10=5.84% 00:34:59.488 cpu : usr=93.18%, sys=4.76%, ctx=231, majf=0, minf=9 00:34:59.488 IO depths : 1=0.4%, 2=6.0%, 4=66.2%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:59.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.488 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.488 issued rwts: total=12813,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:59.488 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:59.488 00:34:59.488 Run status group 0 (all jobs): 00:34:59.488 READ: bw=82.2MiB/s (86.2MB/s), 19.4MiB/s-22.1MiB/s (20.3MB/s-23.2MB/s), io=411MiB (431MB), run=5001-5002msec 00:34:59.488 11:02:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:34:59.488 11:02:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:59.488 11:02:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:59.488 11:02:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:59.488 11:02:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:59.488 11:02:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:59.488 11:02:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.488 11:02:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- 
# set +x 00:34:59.488 11:02:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.488 11:02:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:59.488 11:02:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.488 11:02:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:59.488 11:02:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.488 11:02:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:59.488 11:02:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:59.488 11:02:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:59.488 11:02:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:59.488 11:02:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.488 11:02:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:59.488 11:02:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.488 11:02:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:59.488 11:02:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.488 11:02:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:59.488 11:02:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.488 00:34:59.488 real 0m24.411s 00:34:59.488 user 4m51.553s 00:34:59.488 sys 0m5.349s 00:34:59.488 11:02:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:59.488 11:02:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:59.488 ************************************ 00:34:59.488 END TEST fio_dif_rand_params 00:34:59.488 ************************************ 00:34:59.488 11:02:06 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:34:59.488 11:02:06 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:59.488 11:02:06 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:59.488 11:02:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:59.488 ************************************ 00:34:59.488 START TEST fio_dif_digest 00:34:59.488 ************************************ 00:34:59.488 11:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:34:59.488 11:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:34:59.488 11:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:34:59.488 11:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:34:59.488 11:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:34:59.488 11:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:34:59.488 11:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:34:59.489 
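Before this digest run, create_subsystems repeats the rand_params setup but with NULL_DIF=3, i.e. DIF type 3 metadata on the null bdev, so the hdgst/ddgst settings above can be exercised end to end over NVMe/TCP. The rpc_cmd calls that follow are equivalent to this stand-alone sketch (assuming SPDK's rpc.py is on PATH):

    rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420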
11:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:59.489 bdev_null0 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:59.489 [2024-11-19 11:02:06.698549] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:59.489 { 00:34:59.489 "params": { 00:34:59.489 "name": "Nvme$subsystem", 00:34:59.489 "trtype": 
"$TEST_TRANSPORT", 00:34:59.489 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:59.489 "adrfam": "ipv4", 00:34:59.489 "trsvcid": "$NVMF_PORT", 00:34:59.489 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:59.489 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:59.489 "hdgst": ${hdgst:-false}, 00:34:59.489 "ddgst": ${ddgst:-false} 00:34:59.489 }, 00:34:59.489 "method": "bdev_nvme_attach_controller" 00:34:59.489 } 00:34:59.489 EOF 00:34:59.489 )") 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:59.489 "params": { 00:34:59.489 "name": "Nvme0", 00:34:59.489 "trtype": "tcp", 00:34:59.489 "traddr": "10.0.0.2", 00:34:59.489 "adrfam": "ipv4", 00:34:59.489 "trsvcid": "4420", 00:34:59.489 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:59.489 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:59.489 "hdgst": true, 00:34:59.489 "ddgst": true 00:34:59.489 }, 00:34:59.489 "method": "bdev_nvme_attach_controller" 00:34:59.489 }' 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:59.489 11:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:59.748 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:59.748 ... 
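The grep libasan / grep libclang_rt.asan probes in this stretch decide whether a sanitizer runtime has to be preloaded ahead of the fio plugin; both come back empty on this rig, so asan_lib stays blank and LD_PRELOAD ends up carrying only the plugin. The same logic as a stand-alone sketch:

    plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
    asan_lib=
    for sanitizer in libasan libclang_rt.asan; do
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && break
    done
    # asan_lib is empty in this run, so the preload string is just " $plugin".
    echo "LD_PRELOAD='$asan_lib $plugin'"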
00:34:59.748 fio-3.35 00:34:59.748 Starting 3 threads 00:35:12.049 00:35:12.049 filename0: (groupid=0, jobs=1): err= 0: pid=1953724: Tue Nov 19 11:02:17 2024 00:35:12.049 read: IOPS=290, BW=36.3MiB/s (38.1MB/s)(365MiB/10046msec) 00:35:12.049 slat (nsec): min=6447, max=37564, avg=11609.92, stdev=1833.20 00:35:12.049 clat (usec): min=6267, max=51067, avg=10297.02, stdev=1327.70 00:35:12.049 lat (usec): min=6279, max=51079, avg=10308.63, stdev=1327.73 00:35:12.049 clat percentiles (usec): 00:35:12.049 | 1.00th=[ 7504], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[ 9634], 00:35:12.049 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10290], 60.00th=[10421], 00:35:12.049 | 70.00th=[10683], 80.00th=[10814], 90.00th=[11207], 95.00th=[11600], 00:35:12.049 | 99.00th=[12387], 99.50th=[12649], 99.90th=[13960], 99.95th=[49021], 00:35:12.049 | 99.99th=[51119] 00:35:12.049 bw ( KiB/s): min=35584, max=38656, per=35.11%, avg=37337.60, stdev=1067.78, samples=20 00:35:12.049 iops : min= 278, max= 302, avg=291.70, stdev= 8.34, samples=20 00:35:12.049 lat (msec) : 10=35.15%, 20=64.78%, 50=0.03%, 100=0.03% 00:35:12.049 cpu : usr=94.65%, sys=5.06%, ctx=14, majf=0, minf=0 00:35:12.049 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:12.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:12.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:12.049 issued rwts: total=2919,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:12.049 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:12.049 filename0: (groupid=0, jobs=1): err= 0: pid=1953725: Tue Nov 19 11:02:17 2024 00:35:12.049 read: IOPS=271, BW=34.0MiB/s (35.6MB/s)(341MiB/10045msec) 00:35:12.049 slat (nsec): min=6494, max=35858, avg=11429.12, stdev=1775.79 00:35:12.049 clat (usec): min=7951, max=51989, avg=11013.54, stdev=2233.21 00:35:12.049 lat (usec): min=7963, max=52016, avg=11024.97, stdev=2233.48 00:35:12.049 clat percentiles (usec): 00:35:12.049 | 1.00th=[ 9110], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10290], 00:35:12.049 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10945], 60.00th=[11076], 00:35:12.049 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11863], 95.00th=[12125], 00:35:12.049 | 99.00th=[12911], 99.50th=[13173], 99.90th=[52167], 99.95th=[52167], 00:35:12.049 | 99.99th=[52167] 00:35:12.049 bw ( KiB/s): min=32256, max=35840, per=32.82%, avg=34905.60, stdev=955.25, samples=20 00:35:12.049 iops : min= 252, max= 280, avg=272.70, stdev= 7.46, samples=20 00:35:12.049 lat (msec) : 10=10.33%, 20=89.37%, 50=0.07%, 100=0.22% 00:35:12.049 cpu : usr=94.90%, sys=4.79%, ctx=18, majf=0, minf=11 00:35:12.049 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:12.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:12.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:12.049 issued rwts: total=2729,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:12.049 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:12.049 filename0: (groupid=0, jobs=1): err= 0: pid=1953726: Tue Nov 19 11:02:17 2024 00:35:12.049 read: IOPS=268, BW=33.6MiB/s (35.2MB/s)(337MiB/10043msec) 00:35:12.049 slat (nsec): min=6374, max=41162, avg=11474.46, stdev=1661.75 00:35:12.049 clat (usec): min=6846, max=50058, avg=11138.35, stdev=1297.13 00:35:12.049 lat (usec): min=6858, max=50071, avg=11149.82, stdev=1297.19 00:35:12.049 clat percentiles (usec): 00:35:12.049 | 1.00th=[ 8586], 5.00th=[ 9896], 10.00th=[10290], 
20.00th=[10552], 00:35:12.049 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11076], 60.00th=[11338], 00:35:12.049 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12125], 95.00th=[12387], 00:35:12.049 | 99.00th=[13042], 99.50th=[13173], 99.90th=[13960], 99.95th=[47973], 00:35:12.049 | 99.99th=[50070] 00:35:12.049 bw ( KiB/s): min=33536, max=36096, per=32.45%, avg=34508.80, stdev=672.71, samples=20 00:35:12.049 iops : min= 262, max= 282, avg=269.60, stdev= 5.26, samples=20 00:35:12.049 lat (msec) : 10=6.00%, 20=93.92%, 50=0.04%, 100=0.04% 00:35:12.049 cpu : usr=94.90%, sys=4.80%, ctx=15, majf=0, minf=9 00:35:12.049 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:12.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:12.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:12.049 issued rwts: total=2698,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:12.049 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:12.049 00:35:12.049 Run status group 0 (all jobs): 00:35:12.049 READ: bw=104MiB/s (109MB/s), 33.6MiB/s-36.3MiB/s (35.2MB/s-38.1MB/s), io=1043MiB (1094MB), run=10043-10046msec 00:35:12.049 11:02:17 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:12.049 11:02:17 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:35:12.049 11:02:17 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:35:12.049 11:02:17 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:12.049 11:02:17 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:35:12.049 11:02:17 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:12.049 11:02:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.049 11:02:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:12.049 11:02:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.049 11:02:17 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:12.049 11:02:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.049 11:02:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:12.049 11:02:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.049 00:35:12.049 real 0m11.267s 00:35:12.049 user 0m35.096s 00:35:12.049 sys 0m1.839s 00:35:12.049 11:02:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:12.049 11:02:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:12.049 ************************************ 00:35:12.049 END TEST fio_dif_digest 00:35:12.049 ************************************ 00:35:12.049 11:02:17 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:12.049 11:02:17 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:35:12.049 11:02:17 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:12.049 11:02:17 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:35:12.049 11:02:17 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:12.049 11:02:17 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:35:12.049 11:02:17 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:12.049 11:02:17 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:12.049 rmmod nvme_tcp 00:35:12.049 rmmod nvme_fabrics 00:35:12.049 rmmod nvme_keyring 00:35:12.049 11:02:18 
nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:12.049 11:02:18 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:35:12.049 11:02:18 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:35:12.049 11:02:18 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 1945158 ']' 00:35:12.049 11:02:18 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 1945158 00:35:12.049 11:02:18 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 1945158 ']' 00:35:12.049 11:02:18 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 1945158 00:35:12.049 11:02:18 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:35:12.049 11:02:18 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:12.050 11:02:18 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1945158 00:35:12.050 11:02:18 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:12.050 11:02:18 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:12.050 11:02:18 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1945158' 00:35:12.050 killing process with pid 1945158 00:35:12.050 11:02:18 nvmf_dif -- common/autotest_common.sh@973 -- # kill 1945158 00:35:12.050 11:02:18 nvmf_dif -- common/autotest_common.sh@978 -- # wait 1945158 00:35:12.050 11:02:18 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:35:12.050 11:02:18 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:13.958 Waiting for block devices as requested 00:35:13.958 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:13.958 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:13.958 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:13.958 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:13.958 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:13.958 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:14.218 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:14.218 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:14.218 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:14.477 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:14.477 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:14.477 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:14.477 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:14.737 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:14.737 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:14.737 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:14.997 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:14.997 11:02:22 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:14.997 11:02:22 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:14.997 11:02:22 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:35:14.997 11:02:22 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:35:14.997 11:02:22 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:14.997 11:02:22 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:35:14.997 11:02:22 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:14.997 11:02:22 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:14.997 11:02:22 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:14.997 11:02:22 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:14.997 11:02:22 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:17.536 11:02:24 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 
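The nvmftestfini teardown replayed above condenses to four effective host-side steps; the SPDK_NVMF iptables marker and the cvl_0_1 interface name are specific to this rig:

    modprobe -v -r nvme-tcp      # rmmod output above shows nvme_fabrics/nvme_keyring unloading too
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip -4 addr flush cvl_0_1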
00:35:17.536 00:35:17.536 real 1m14.242s 00:35:17.536 user 7m8.635s 00:35:17.536 sys 0m21.171s 00:35:17.536 11:02:24 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:17.536 11:02:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:17.536 ************************************ 00:35:17.536 END TEST nvmf_dif 00:35:17.536 ************************************ 00:35:17.536 11:02:24 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:17.536 11:02:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:17.536 11:02:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:17.536 11:02:24 -- common/autotest_common.sh@10 -- # set +x 00:35:17.536 ************************************ 00:35:17.536 START TEST nvmf_abort_qd_sizes 00:35:17.536 ************************************ 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:17.536 * Looking for test storage... 00:35:17.536 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:17.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.536 --rc genhtml_branch_coverage=1 00:35:17.536 --rc genhtml_function_coverage=1 00:35:17.536 --rc genhtml_legend=1 00:35:17.536 --rc geninfo_all_blocks=1 00:35:17.536 --rc geninfo_unexecuted_blocks=1 00:35:17.536 00:35:17.536 ' 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:17.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.536 --rc genhtml_branch_coverage=1 00:35:17.536 --rc genhtml_function_coverage=1 00:35:17.536 --rc genhtml_legend=1 00:35:17.536 --rc geninfo_all_blocks=1 00:35:17.536 --rc geninfo_unexecuted_blocks=1 00:35:17.536 00:35:17.536 ' 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:17.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.536 --rc genhtml_branch_coverage=1 00:35:17.536 --rc genhtml_function_coverage=1 00:35:17.536 --rc genhtml_legend=1 00:35:17.536 --rc geninfo_all_blocks=1 00:35:17.536 --rc geninfo_unexecuted_blocks=1 00:35:17.536 00:35:17.536 ' 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:17.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.536 --rc genhtml_branch_coverage=1 00:35:17.536 --rc genhtml_function_coverage=1 00:35:17.536 --rc genhtml_legend=1 00:35:17.536 --rc geninfo_all_blocks=1 00:35:17.536 --rc geninfo_unexecuted_blocks=1 00:35:17.536 00:35:17.536 ' 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:17.536 11:02:24 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:17.537 11:02:24 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.537 11:02:24 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.537 11:02:24 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.537 11:02:24 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:35:17.537 11:02:24 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.537 11:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:35:17.537 11:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:17.537 11:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:17.537 11:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:17.537 11:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:17.537 11:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:17.537 11:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:17.537 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:17.537 11:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:17.537 11:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:17.537 11:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:17.537 11:02:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:35:17.537 11:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:17.537 11:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:17.537 11:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:17.537 11:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:17.537 11:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:17.537 11:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:17.537 11:02:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:17.537 11:02:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:17.537 11:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:17.537 11:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:17.537 11:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:35:17.537 11:02:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:22.815 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:22.815 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:22.815 Found net devices under 0000:86:00.0: cvl_0_0 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:22.815 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:22.816 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:22.816 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:22.816 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:22.816 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:22.816 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:22.816 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:22.816 Found net devices under 0000:86:00.1: cvl_0_1 00:35:22.816 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:22.816 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:22.816 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:35:22.816 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:22.816 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:22.816 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:22.816 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:22.816 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:22.816 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:22.816 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:22.816 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:22.816 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:22.816 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:22.816 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:22.816 11:02:30 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:22.816 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:22.816 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:22.816 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:22.816 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:23.075 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:23.075 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:23.075 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:23.075 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:23.075 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:23.075 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:23.075 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:23.075 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:23.075 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:23.075 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:23.075 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:23.075 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:35:23.075 00:35:23.075 --- 10.0.0.2 ping statistics --- 00:35:23.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:23.075 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:35:23.075 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:23.075 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
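The trace above stitches the whole test topology onto one host: the first port (cvl_0_0) is moved into a private network namespace and addressed as the target side (10.0.0.2), the second port (cvl_0_1) stays in the root namespace as the initiator side (10.0.0.1), an iptables rule opens TCP port 4420 for NVMe/TCP, and a ping in each direction confirms the path before any target is started. A minimal standalone sketch of the same wiring, with the placeholder names eth_tgt/eth_ini standing in for the cvl_0_* devices:

    # sketch: single-host NVMe/TCP loopback topology via a network namespace
    ip netns add nvmf_tgt_ns
    ip link set eth_tgt netns nvmf_tgt_ns            # target-side port
    ip addr add 10.0.0.1/24 dev eth_ini              # initiator address
    ip netns exec nvmf_tgt_ns ip addr add 10.0.0.2/24 dev eth_tgt
    ip link set eth_ini up
    ip netns exec nvmf_tgt_ns ip link set eth_tgt up
    ip netns exec nvmf_tgt_ns ip link set lo up
    iptables -I INPUT 1 -i eth_ini -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                               # root ns -> namespace
    ip netns exec nvmf_tgt_ns ping -c 1 10.0.0.1     # namespace -> root ns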
00:35:23.075 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:35:23.075 00:35:23.075 --- 10.0.0.1 ping statistics --- 00:35:23.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:23.075 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:35:23.075 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:23.075 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:35:23.075 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:35:23.075 11:02:30 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:26.368 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:26.368 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:26.368 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:26.368 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:26.368 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:26.368 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:26.368 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:26.368 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:26.368 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:26.368 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:26.368 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:26.368 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:26.368 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:26.368 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:26.368 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:26.368 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:26.936 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:26.936 11:02:34 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:26.936 11:02:34 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:26.936 11:02:34 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:26.936 11:02:34 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:26.936 11:02:34 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:26.936 11:02:34 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:26.936 11:02:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:35:26.936 11:02:34 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:26.936 11:02:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:26.936 11:02:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:26.936 11:02:34 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=1961534 00:35:26.936 11:02:34 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 1961534 00:35:26.936 11:02:34 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:26.936 11:02:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 1961534 ']' 00:35:26.936 11:02:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:26.936 11:02:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:26.936 11:02:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
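Once setup.sh has rebound the ioatdma and NVMe devices to vfio-pci and nvme-tcp is loaded, nvmfappstart launches the SPDK target inside the namespace and blocks until its RPC socket answers. A sketch of that launch-and-wait step, assuming the workspace layout above and using the rpc_get_methods RPC as a readiness probe:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec nvmf_tgt_ns "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xf &
    tgt_pid=$!
    # poll the default RPC socket (/var/tmp/spdk.sock) until the app is up
    until "$SPDK/scripts/rpc.py" rpc_get_methods &> /dev/null; do
        kill -0 "$tgt_pid" 2> /dev/null || exit 1    # bail out if it died
        sleep 0.5
    done

Here -m 0xf pins four reactor cores (matching the four "Reactor started" notices that follow), -i 0 selects shared-memory id 0, and -e 0xFFFF enables all tracepoint groups.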
00:35:26.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:26.936 11:02:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:26.936 11:02:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:27.195 [2024-11-19 11:02:34.388708] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:35:27.195 [2024-11-19 11:02:34.388758] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:27.195 [2024-11-19 11:02:34.469876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:27.195 [2024-11-19 11:02:34.512011] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:27.195 [2024-11-19 11:02:34.512050] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:27.195 [2024-11-19 11:02:34.512058] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:27.195 [2024-11-19 11:02:34.512064] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:27.195 [2024-11-19 11:02:34.512068] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:27.195 [2024-11-19 11:02:34.513540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:27.195 [2024-11-19 11:02:34.513649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:27.196 [2024-11-19 11:02:34.513776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:27.196 [2024-11-19 11:02:34.513777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:27.196 11:02:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:27.196 11:02:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:35:27.196 11:02:34 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:27.196 11:02:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:27.196 11:02:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:27.454 11:02:34 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:27.454 11:02:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:27.454 11:02:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:35:27.454 11:02:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:35:27.454 11:02:34 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:35:27.454 11:02:34 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:35:27.454 11:02:34 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:35:27.454 11:02:34 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:27.454 11:02:34 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:35:27.454 11:02:34 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:35:27.454 11:02:34 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:35:27.454 
11:02:34 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:35:27.454 11:02:34 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:35:27.454 11:02:34 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:35:27.454 11:02:34 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:35:27.454 11:02:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:35:27.454 11:02:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:35:27.455 11:02:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:35:27.455 11:02:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:27.455 11:02:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:27.455 11:02:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:27.455 ************************************ 00:35:27.455 START TEST spdk_target_abort 00:35:27.455 ************************************ 00:35:27.455 11:02:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:35:27.455 11:02:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:27.455 11:02:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:35:27.455 11:02:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.455 11:02:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:30.745 spdk_targetn1 00:35:30.745 11:02:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.745 11:02:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:30.745 11:02:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.745 11:02:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:30.745 [2024-11-19 11:02:37.537874] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:30.745 11:02:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.745 11:02:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:35:30.745 11:02:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.745 11:02:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:30.745 11:02:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.745 11:02:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:35:30.745 11:02:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.745 11:02:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:30.745 11:02:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.745 11:02:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:35:30.745 11:02:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.745 11:02:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:30.745 [2024-11-19 11:02:37.589070] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:30.745 11:02:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.745 11:02:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:35:30.745 11:02:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:30.745 11:02:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:30.745 11:02:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:30.745 11:02:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:30.745 11:02:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:30.745 11:02:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:30.745 11:02:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:30.745 11:02:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:30.745 11:02:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:30.745 11:02:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:30.745 11:02:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:30.745 11:02:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:30.745 11:02:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:30.745 11:02:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:30.745 11:02:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:30.745 11:02:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:30.745 11:02:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:30.745 11:02:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:30.745 11:02:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:30.745 11:02:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:34.032 Initializing NVMe Controllers 00:35:34.032 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:34.032 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:34.032 Initialization complete. Launching workers. 00:35:34.032 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 16338, failed: 0 00:35:34.032 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1329, failed to submit 15009 00:35:34.032 success 741, unsuccessful 588, failed 0 00:35:34.032 11:02:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:34.032 11:02:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:37.318 Initializing NVMe Controllers 00:35:37.318 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:37.318 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:37.318 Initialization complete. Launching workers. 00:35:37.318 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8564, failed: 0 00:35:37.318 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1236, failed to submit 7328 00:35:37.318 success 337, unsuccessful 899, failed 0 00:35:37.318 11:02:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:37.319 11:02:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:40.607 Initializing NVMe Controllers 00:35:40.607 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:40.607 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:40.607 Initialization complete. Launching workers. 
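Each abort pass drives the same mixed workload (-w rw -M 50, 4 KiB I/O) at a different queue depth (-q 4, 24, 64) against the subsystem provisioned earlier over JSON-RPC; broadly, "success" counts abort commands that caught their I/O still in flight, while "unsuccessful" ones raced an I/O that had already completed, which is why the split shifts with queue depth. For reference, the provisioning traced before the first pass boils down to (a sketch, assuming rpc.py points at the target's default socket):

    # export the local NVMe device over NVMe/TCP (mirrors the RPCs traced above)
    rpc.py bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420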
00:35:40.607 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37764, failed: 0 00:35:40.607 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2777, failed to submit 34987 00:35:40.607 success 581, unsuccessful 2196, failed 0 00:35:40.607 11:02:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:35:40.607 11:02:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.607 11:02:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:40.607 11:02:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.607 11:02:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:40.607 11:02:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.607 11:02:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:41.546 11:02:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.546 11:02:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1961534 00:35:41.546 11:02:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 1961534 ']' 00:35:41.546 11:02:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 1961534 00:35:41.546 11:02:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:35:41.546 11:02:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:41.546 11:02:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1961534 00:35:41.546 11:02:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:41.546 11:02:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:41.546 11:02:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1961534' 00:35:41.546 killing process with pid 1961534 00:35:41.546 11:02:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 1961534 00:35:41.546 11:02:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 1961534 00:35:41.546 00:35:41.546 real 0m14.208s 00:35:41.546 user 0m54.143s 00:35:41.546 sys 0m2.623s 00:35:41.546 11:02:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:41.546 11:02:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:41.546 ************************************ 00:35:41.546 END TEST spdk_target_abort 00:35:41.546 ************************************ 00:35:41.546 11:02:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:35:41.546 11:02:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:41.546 11:02:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:41.546 11:02:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:41.546 ************************************ 00:35:41.546 START TEST kernel_target_abort 00:35:41.546 
************************************ 00:35:41.546 11:02:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:35:41.546 11:02:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:35:41.546 11:02:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:35:41.546 11:02:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:41.546 11:02:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:41.546 11:02:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:41.546 11:02:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:41.546 11:02:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:41.546 11:02:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:41.546 11:02:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:41.546 11:02:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:41.546 11:02:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:41.546 11:02:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:41.547 11:02:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:41.547 11:02:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:41.547 11:02:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:41.547 11:02:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:41.547 11:02:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:41.547 11:02:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:35:41.547 11:02:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:35:41.547 11:02:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:41.806 11:02:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:41.806 11:02:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:44.343 Waiting for block devices as requested 00:35:44.343 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:44.604 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:44.604 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:44.604 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:44.863 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:44.863 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:44.863 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:45.122 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:45.122 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:45.122 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:45.122 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:45.381 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:45.381 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:45.381 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:45.641 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:45.641 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:45.641 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:45.901 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:45.901 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:45.901 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:35:45.901 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:35:45.901 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:45.901 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:45.901 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:35:45.901 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:45.901 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:45.901 No valid GPT data, bailing 00:35:45.901 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:45.901 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:35:45.901 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:35:45.901 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:35:45.901 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:35:45.901 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:45.901 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:45.901 11:02:53 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:45.901 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:45.901 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:35:45.901 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:35:45.901 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:35:45.901 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:35:45.901 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:35:45.901 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:35:45.901 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:35:45.901 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:45.901 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:35:45.901 00:35:45.901 Discovery Log Number of Records 2, Generation counter 2 00:35:45.901 =====Discovery Log Entry 0====== 00:35:45.901 trtype: tcp 00:35:45.901 adrfam: ipv4 00:35:45.901 subtype: current discovery subsystem 00:35:45.901 treq: not specified, sq flow control disable supported 00:35:45.901 portid: 1 00:35:45.901 trsvcid: 4420 00:35:45.901 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:45.901 traddr: 10.0.0.1 00:35:45.901 eflags: none 00:35:45.901 sectype: none 00:35:45.901 =====Discovery Log Entry 1====== 00:35:45.901 trtype: tcp 00:35:45.901 adrfam: ipv4 00:35:45.901 subtype: nvme subsystem 00:35:45.901 treq: not specified, sq flow control disable supported 00:35:45.901 portid: 1 00:35:45.901 trsvcid: 4420 00:35:45.901 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:45.902 traddr: 10.0.0.1 00:35:45.902 eflags: none 00:35:45.902 sectype: none 00:35:45.902 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:35:45.902 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:45.902 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:45.902 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:45.902 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:45.902 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:45.902 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:45.902 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:45.902 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:45.902 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:45.902 11:02:53 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:45.902 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:45.902 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:45.902 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:45.902 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:45.902 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:45.902 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:35:45.902 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:45.902 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:45.902 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:45.902 11:02:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:49.193 Initializing NVMe Controllers 00:35:49.193 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:49.193 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:49.193 Initialization complete. Launching workers. 00:35:49.193 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 93472, failed: 0 00:35:49.193 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 93472, failed to submit 0 00:35:49.193 success 0, unsuccessful 93472, failed 0 00:35:49.193 11:02:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:49.193 11:02:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:52.483 Initializing NVMe Controllers 00:35:52.483 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:52.483 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:52.483 Initialization complete. Launching workers. 
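The kernel-target half needs no SPDK RPCs: configure_kernel_target assembled the Linux nvmet target directly through configfs, creating a subsystem with one namespace backed by /dev/nvme0n1, a TCP port on 10.0.0.1:4420, and a symlink binding the two. A condensed sketch of that sequence (attribute names follow the kernel's nvmet configfs layout, as traced above):

    modprobe nvmet
    cfs=/sys/kernel/config/nvmet
    subsys="$cfs/subsystems/nqn.2016-06.io.spdk:testnqn"
    mkdir "$subsys"
    mkdir "$subsys/namespaces/1" "$cfs/ports/1"
    echo 1            > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$cfs/ports/1/addr_traddr"
    echo tcp          > "$cfs/ports/1/addr_trtype"
    echo 4420         > "$cfs/ports/1/addr_trsvcid"
    echo ipv4         > "$cfs/ports/1/addr_adrfam"
    ln -s "$subsys" "$cfs/ports/1/subsystems/"

After the symlink, nvme discover -t tcp -a 10.0.0.1 -s 4420 reports the two discovery-log entries shown above, and clean_kernel_target later undoes it all in reverse (rm the port symlink, rmdir the namespace/port/subsystem directories, modprobe -r nvmet_tcp nvmet).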
00:35:52.483 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 146299, failed: 0 00:35:52.483 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36682, failed to submit 109617 00:35:52.483 success 0, unsuccessful 36682, failed 0 00:35:52.483 11:02:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:52.483 11:02:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:55.779 Initializing NVMe Controllers 00:35:55.779 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:55.779 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:55.779 Initialization complete. Launching workers. 00:35:55.779 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 135368, failed: 0 00:35:55.779 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33906, failed to submit 101462 00:35:55.779 success 0, unsuccessful 33906, failed 0 00:35:55.779 11:03:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:35:55.779 11:03:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:55.779 11:03:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:35:55.779 11:03:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:55.779 11:03:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:55.779 11:03:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:55.779 11:03:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:55.779 11:03:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:55.779 11:03:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:35:55.779 11:03:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:58.317 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:58.317 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:58.317 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:58.317 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:58.317 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:58.317 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:58.317 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:58.317 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:58.317 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:58.317 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:58.317 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:58.317 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:58.317 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:58.317 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:58.317 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:35:58.317 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:59.255 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:59.255 00:35:59.255 real 0m17.512s 00:35:59.255 user 0m9.117s 00:35:59.255 sys 0m5.071s 00:35:59.255 11:03:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:59.255 11:03:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:59.255 ************************************ 00:35:59.255 END TEST kernel_target_abort 00:35:59.255 ************************************ 00:35:59.255 11:03:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:35:59.255 11:03:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:35:59.255 11:03:06 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:59.255 11:03:06 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:35:59.255 11:03:06 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:59.255 11:03:06 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:35:59.255 11:03:06 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:59.255 11:03:06 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:59.255 rmmod nvme_tcp 00:35:59.255 rmmod nvme_fabrics 00:35:59.255 rmmod nvme_keyring 00:35:59.255 11:03:06 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:59.255 11:03:06 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:35:59.255 11:03:06 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:35:59.255 11:03:06 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 1961534 ']' 00:35:59.255 11:03:06 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 1961534 00:35:59.255 11:03:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 1961534 ']' 00:35:59.255 11:03:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 1961534 00:35:59.255 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1961534) - No such process 00:35:59.255 11:03:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 1961534 is not found' 00:35:59.255 Process with pid 1961534 is not found 00:35:59.255 11:03:06 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:35:59.255 11:03:06 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:02.546 Waiting for block devices as requested 00:36:02.546 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:36:02.546 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:02.546 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:02.546 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:02.546 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:02.546 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:02.546 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:02.546 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:02.805 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:02.805 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:02.805 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:02.805 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:03.064 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:03.064 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:03.064 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:03.324 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:03.324 
0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:03.324 11:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:03.324 11:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:03.324 11:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:36:03.324 11:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:36:03.324 11:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:03.324 11:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:36:03.324 11:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:03.324 11:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:03.324 11:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:03.324 11:03:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:03.324 11:03:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:05.862 11:03:12 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:05.862 00:36:05.862 real 0m48.297s 00:36:05.862 user 1m7.595s 00:36:05.862 sys 0m16.383s 00:36:05.862 11:03:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:05.862 11:03:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:05.862 ************************************ 00:36:05.862 END TEST nvmf_abort_qd_sizes 00:36:05.862 ************************************ 00:36:05.862 11:03:12 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:05.862 11:03:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:05.862 11:03:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:05.862 11:03:12 -- common/autotest_common.sh@10 -- # set +x 00:36:05.862 ************************************ 00:36:05.862 START TEST keyring_file 00:36:05.862 ************************************ 00:36:05.862 11:03:12 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:05.862 * Looking for test storage... 
00:36:05.862 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:05.862 11:03:12 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:05.862 11:03:12 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:36:05.862 11:03:12 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:05.862 11:03:12 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:05.862 11:03:12 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:05.862 11:03:12 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:05.862 11:03:12 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:05.862 11:03:12 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:36:05.862 11:03:12 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:36:05.862 11:03:12 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:36:05.862 11:03:12 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:36:05.862 11:03:12 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:36:05.862 11:03:12 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:36:05.862 11:03:12 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:36:05.862 11:03:12 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:05.862 11:03:12 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:36:05.862 11:03:12 keyring_file -- scripts/common.sh@345 -- # : 1 00:36:05.862 11:03:12 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:05.862 11:03:12 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:05.862 11:03:12 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:36:05.862 11:03:12 keyring_file -- scripts/common.sh@353 -- # local d=1 00:36:05.862 11:03:12 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:05.862 11:03:12 keyring_file -- scripts/common.sh@355 -- # echo 1 00:36:05.862 11:03:12 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:36:05.862 11:03:12 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:36:05.862 11:03:12 keyring_file -- scripts/common.sh@353 -- # local d=2 00:36:05.862 11:03:12 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:05.862 11:03:13 keyring_file -- scripts/common.sh@355 -- # echo 2 00:36:05.862 11:03:13 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:36:05.862 11:03:13 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:05.862 11:03:13 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:05.862 11:03:13 keyring_file -- scripts/common.sh@368 -- # return 0 00:36:05.862 11:03:13 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:05.862 11:03:13 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:05.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:05.862 --rc genhtml_branch_coverage=1 00:36:05.862 --rc genhtml_function_coverage=1 00:36:05.862 --rc genhtml_legend=1 00:36:05.862 --rc geninfo_all_blocks=1 00:36:05.862 --rc geninfo_unexecuted_blocks=1 00:36:05.862 00:36:05.862 ' 00:36:05.862 11:03:13 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:05.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:05.862 --rc genhtml_branch_coverage=1 00:36:05.862 --rc genhtml_function_coverage=1 00:36:05.862 --rc genhtml_legend=1 00:36:05.862 --rc geninfo_all_blocks=1 
00:36:05.862 --rc geninfo_unexecuted_blocks=1 00:36:05.862 00:36:05.862 ' 00:36:05.862 11:03:13 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:05.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:05.862 --rc genhtml_branch_coverage=1 00:36:05.862 --rc genhtml_function_coverage=1 00:36:05.862 --rc genhtml_legend=1 00:36:05.862 --rc geninfo_all_blocks=1 00:36:05.862 --rc geninfo_unexecuted_blocks=1 00:36:05.862 00:36:05.862 ' 00:36:05.862 11:03:13 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:05.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:05.862 --rc genhtml_branch_coverage=1 00:36:05.862 --rc genhtml_function_coverage=1 00:36:05.862 --rc genhtml_legend=1 00:36:05.862 --rc geninfo_all_blocks=1 00:36:05.862 --rc geninfo_unexecuted_blocks=1 00:36:05.862 00:36:05.862 ' 00:36:05.862 11:03:13 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:05.862 11:03:13 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:05.862 11:03:13 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:05.862 11:03:13 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:05.862 11:03:13 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:05.862 11:03:13 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:05.862 11:03:13 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:05.862 11:03:13 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:05.862 11:03:13 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:05.862 11:03:13 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:05.862 11:03:13 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:05.862 11:03:13 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:05.862 11:03:13 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:05.862 11:03:13 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:36:05.863 11:03:13 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:36:05.863 11:03:13 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:05.863 11:03:13 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:05.863 11:03:13 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:05.863 11:03:13 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:05.863 11:03:13 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:05.863 11:03:13 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:36:05.863 11:03:13 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:05.863 11:03:13 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:05.863 11:03:13 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:05.863 11:03:13 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:05.863 11:03:13 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:05.863 11:03:13 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:05.863 11:03:13 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:05.863 11:03:13 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:05.863 11:03:13 keyring_file -- nvmf/common.sh@51 -- # : 0 00:36:05.863 11:03:13 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:05.863 11:03:13 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:05.863 11:03:13 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:05.863 11:03:13 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:05.863 11:03:13 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:05.863 11:03:13 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:05.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:05.863 11:03:13 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:05.863 11:03:13 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:05.863 11:03:13 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:05.863 11:03:13 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:05.863 11:03:13 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:05.863 11:03:13 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:05.863 11:03:13 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:05.863 11:03:13 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:05.863 11:03:13 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:05.863 11:03:13 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:05.863 11:03:13 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
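Note on the paths/export.sh trace above: each export step prepends the same /opt/golangci, /opt/protoc and /opt/go directories again, so the final PATH carries several duplicate entries. A minimal, order-preserving de-duplication pass could look like the sketch below; dedup_path is an illustrative helper name, not part of paths/export.sh.

    # Collapse duplicate entries in PATH while preserving first-seen order.
    # (Sketch only; assumes no glob characters in PATH components.)
    dedup_path() {
        local seen= out= dir
        local IFS=:
        for dir in $PATH; do
            case ":$seen:" in
                *":$dir:"*) ;;                      # already kept, skip
                *) seen="$seen:$dir"
                   out="${out:+$out:}$dir" ;;
            esac
        done
        printf '%s\n' "$out"
    }
    PATH=$(dedup_path)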
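The "[: : integer expression expected" message logged above comes from nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']': a flag variable expands to the empty string, which is not a valid operand for -eq. A minimal sketch of the failing pattern and a guarded variant; SPDK_TEST_FOO is a placeholder for the elided variable name.

    # Fails with "integer expression expected" when the variable is unset/empty:
    #   [ "$SPDK_TEST_FOO" -eq 1 ] && enable_feature
    # Guarded variant: default the expansion to 0 so the numeric test is well formed.
    if [ "${SPDK_TEST_FOO:-0}" -eq 1 ]; then
        echo "feature enabled"
    fi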
00:36:05.863 11:03:13 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:05.863 11:03:13 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:05.863 11:03:13 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:05.863 11:03:13 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:05.863 11:03:13 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.eztyszJEvm 00:36:05.863 11:03:13 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:05.863 11:03:13 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:05.863 11:03:13 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:05.863 11:03:13 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:05.863 11:03:13 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:36:05.863 11:03:13 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:05.863 11:03:13 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:05.863 11:03:13 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.eztyszJEvm 00:36:05.863 11:03:13 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.eztyszJEvm 00:36:05.863 11:03:13 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.eztyszJEvm 00:36:05.863 11:03:13 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:05.863 11:03:13 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:05.863 11:03:13 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:05.863 11:03:13 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:05.863 11:03:13 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:05.863 11:03:13 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:05.863 11:03:13 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.6Yaly3ISLY 00:36:05.863 11:03:13 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:05.863 11:03:13 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:05.863 11:03:13 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:05.863 11:03:13 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:05.863 11:03:13 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:36:05.863 11:03:13 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:05.863 11:03:13 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:05.863 11:03:13 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.6Yaly3ISLY 00:36:05.863 11:03:13 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.6Yaly3ISLY 00:36:05.863 11:03:13 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.6Yaly3ISLY 00:36:05.863 11:03:13 keyring_file -- keyring/file.sh@30 -- # tgtpid=1970817 00:36:05.863 11:03:13 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1970817 00:36:05.863 11:03:13 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:05.863 11:03:13 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1970817 ']' 00:36:05.863 11:03:13 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:05.863 11:03:13 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:05.863 11:03:13 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:05.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:05.863 11:03:13 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:05.863 11:03:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:05.863 [2024-11-19 11:03:13.197825] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:36:05.864 [2024-11-19 11:03:13.197876] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1970817 ] 00:36:05.864 [2024-11-19 11:03:13.273662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:06.122 [2024-11-19 11:03:13.316839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:06.122 11:03:13 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:06.122 11:03:13 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:36:06.122 11:03:13 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:06.122 11:03:13 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.122 11:03:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:06.122 [2024-11-19 11:03:13.523316] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:06.122 null0 00:36:06.122 [2024-11-19 11:03:13.555367] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:06.122 [2024-11-19 11:03:13.555733] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:06.382 11:03:13 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.382 11:03:13 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:06.382 11:03:13 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:06.382 11:03:13 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:06.382 11:03:13 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:06.382 11:03:13 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:06.382 11:03:13 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:06.382 11:03:13 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:06.382 11:03:13 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:06.382 11:03:13 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.382 11:03:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:06.382 [2024-11-19 11:03:13.587453] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:06.382 request: 00:36:06.382 { 00:36:06.382 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:06.382 "secure_channel": false, 00:36:06.382 "listen_address": { 00:36:06.382 "trtype": "tcp", 00:36:06.382 "traddr": "127.0.0.1", 00:36:06.382 "trsvcid": "4420" 00:36:06.382 }, 00:36:06.382 "method": "nvmf_subsystem_add_listener", 00:36:06.382 "req_id": 1 00:36:06.382 } 00:36:06.382 Got JSON-RPC error response 00:36:06.382 response: 00:36:06.382 { 00:36:06.382 
"code": -32602, 00:36:06.382 "message": "Invalid parameters" 00:36:06.382 } 00:36:06.382 11:03:13 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:06.382 11:03:13 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:06.382 11:03:13 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:06.382 11:03:13 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:06.382 11:03:13 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:06.382 11:03:13 keyring_file -- keyring/file.sh@47 -- # bperfpid=1970828 00:36:06.382 11:03:13 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1970828 /var/tmp/bperf.sock 00:36:06.382 11:03:13 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:06.382 11:03:13 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1970828 ']' 00:36:06.382 11:03:13 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:06.382 11:03:13 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:06.382 11:03:13 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:06.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:06.382 11:03:13 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:06.382 11:03:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:06.382 [2024-11-19 11:03:13.643091] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:36:06.382 [2024-11-19 11:03:13.643133] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1970828 ] 00:36:06.382 [2024-11-19 11:03:13.718298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:06.382 [2024-11-19 11:03:13.758886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:06.640 11:03:13 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:06.641 11:03:13 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:36:06.641 11:03:13 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.eztyszJEvm 00:36:06.641 11:03:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.eztyszJEvm 00:36:06.641 11:03:14 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.6Yaly3ISLY 00:36:06.641 11:03:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.6Yaly3ISLY 00:36:06.900 11:03:14 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:36:06.900 11:03:14 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:06.900 11:03:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:06.900 11:03:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:06.900 11:03:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:36:07.158 11:03:14 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.eztyszJEvm == \/\t\m\p\/\t\m\p\.\e\z\t\y\s\z\J\E\v\m ]] 00:36:07.158 11:03:14 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:36:07.158 11:03:14 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:36:07.158 11:03:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:07.158 11:03:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:07.158 11:03:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:07.417 11:03:14 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.6Yaly3ISLY == \/\t\m\p\/\t\m\p\.\6\Y\a\l\y\3\I\S\L\Y ]] 00:36:07.417 11:03:14 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:36:07.417 11:03:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:07.417 11:03:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:07.417 11:03:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:07.417 11:03:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:07.417 11:03:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:07.676 11:03:14 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:36:07.676 11:03:14 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:36:07.676 11:03:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:07.676 11:03:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:07.676 11:03:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:07.676 11:03:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:07.676 11:03:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:07.676 11:03:15 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:36:07.676 11:03:15 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:07.677 11:03:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:07.938 [2024-11-19 11:03:15.242019] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:07.938 nvme0n1 00:36:07.938 11:03:15 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:36:07.938 11:03:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:07.938 11:03:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:07.938 11:03:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:07.938 11:03:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:07.938 11:03:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:08.260 11:03:15 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:36:08.260 11:03:15 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:36:08.260 11:03:15 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:36:08.260 11:03:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:08.260 11:03:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:08.260 11:03:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:08.260 11:03:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:08.581 11:03:15 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:36:08.581 11:03:15 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:08.581 Running I/O for 1 seconds... 00:36:09.559 18715.00 IOPS, 73.11 MiB/s 00:36:09.559 Latency(us) 00:36:09.559 [2024-11-19T10:03:17.008Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:09.559 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:09.559 nvme0n1 : 1.00 18757.15 73.27 0.00 0.00 6811.60 4502.04 16640.45 00:36:09.559 [2024-11-19T10:03:17.008Z] =================================================================================================================== 00:36:09.559 [2024-11-19T10:03:17.008Z] Total : 18757.15 73.27 0.00 0.00 6811.60 4502.04 16640.45 00:36:09.559 { 00:36:09.559 "results": [ 00:36:09.559 { 00:36:09.559 "job": "nvme0n1", 00:36:09.559 "core_mask": "0x2", 00:36:09.559 "workload": "randrw", 00:36:09.559 "percentage": 50, 00:36:09.559 "status": "finished", 00:36:09.559 "queue_depth": 128, 00:36:09.559 "io_size": 4096, 00:36:09.559 "runtime": 1.004577, 00:36:09.559 "iops": 18757.14853117282, 00:36:09.559 "mibps": 73.27011144989383, 00:36:09.559 "io_failed": 0, 00:36:09.559 "io_timeout": 0, 00:36:09.559 "avg_latency_us": 6811.602757614983, 00:36:09.559 "min_latency_us": 4502.038260869565, 00:36:09.559 "max_latency_us": 16640.445217391305 00:36:09.559 } 00:36:09.559 ], 00:36:09.559 "core_count": 1 00:36:09.559 } 00:36:09.559 11:03:16 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:09.559 11:03:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:09.818 11:03:17 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:36:09.818 11:03:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:09.818 11:03:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:09.818 11:03:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:09.818 11:03:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:09.818 11:03:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:09.818 11:03:17 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:09.818 11:03:17 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:36:09.818 11:03:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:09.818 11:03:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:09.818 11:03:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:09.818 11:03:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:09.818 11:03:17 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:10.077 11:03:17 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:36:10.077 11:03:17 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:10.077 11:03:17 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:10.077 11:03:17 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:10.077 11:03:17 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:10.077 11:03:17 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:10.077 11:03:17 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:10.077 11:03:17 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:10.077 11:03:17 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:10.077 11:03:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:10.336 [2024-11-19 11:03:17.627582] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:10.336 [2024-11-19 11:03:17.628032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1868d00 (107): Transport endpoint is not connected 00:36:10.336 [2024-11-19 11:03:17.629027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1868d00 (9): Bad file descriptor 00:36:10.336 [2024-11-19 11:03:17.630028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:36:10.336 [2024-11-19 11:03:17.630038] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:10.336 [2024-11-19 11:03:17.630046] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:36:10.336 [2024-11-19 11:03:17.630054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
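The attach attempt above runs under the NOT wrapper, so the "Transport endpoint is not connected" errors are the expected outcome of offering key1 where the target was set up with key0. A minimal sketch of the status-inverting pattern behind NOT; the real helper in autotest_common.sh additionally manages xtrace state and the es bookkeeping visible in the trace.

    NOT() {
        # Run the command; succeed only if it fails.
        if "$@"; then
            return 1        # unexpected success
        fi
        return 0            # expected failure
    }
    # Usage from this test: attaching with the wrong PSK must fail.
    NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1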
00:36:10.336 request: 00:36:10.336 { 00:36:10.336 "name": "nvme0", 00:36:10.336 "trtype": "tcp", 00:36:10.336 "traddr": "127.0.0.1", 00:36:10.336 "adrfam": "ipv4", 00:36:10.336 "trsvcid": "4420", 00:36:10.336 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:10.336 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:10.336 "prchk_reftag": false, 00:36:10.336 "prchk_guard": false, 00:36:10.336 "hdgst": false, 00:36:10.336 "ddgst": false, 00:36:10.336 "psk": "key1", 00:36:10.336 "allow_unrecognized_csi": false, 00:36:10.336 "method": "bdev_nvme_attach_controller", 00:36:10.336 "req_id": 1 00:36:10.336 } 00:36:10.336 Got JSON-RPC error response 00:36:10.336 response: 00:36:10.336 { 00:36:10.336 "code": -5, 00:36:10.336 "message": "Input/output error" 00:36:10.336 } 00:36:10.336 11:03:17 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:10.336 11:03:17 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:10.336 11:03:17 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:10.336 11:03:17 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:10.336 11:03:17 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:36:10.336 11:03:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:10.336 11:03:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:10.336 11:03:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:10.336 11:03:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:10.336 11:03:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:10.595 11:03:17 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:10.595 11:03:17 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:36:10.595 11:03:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:10.595 11:03:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:10.595 11:03:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:10.595 11:03:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:10.595 11:03:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:10.854 11:03:18 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:36:10.854 11:03:18 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:36:10.854 11:03:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:10.854 11:03:18 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:36:10.854 11:03:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:11.113 11:03:18 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:36:11.113 11:03:18 keyring_file -- keyring/file.sh@78 -- # jq length 00:36:11.113 11:03:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:11.372 11:03:18 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:36:11.372 11:03:18 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.eztyszJEvm 00:36:11.372 11:03:18 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.eztyszJEvm 00:36:11.372 11:03:18 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:11.372 11:03:18 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.eztyszJEvm 00:36:11.372 11:03:18 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:11.372 11:03:18 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:11.372 11:03:18 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:11.372 11:03:18 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:11.372 11:03:18 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.eztyszJEvm 00:36:11.372 11:03:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.eztyszJEvm 00:36:11.372 [2024-11-19 11:03:18.808126] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.eztyszJEvm': 0100660 00:36:11.372 [2024-11-19 11:03:18.808153] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:11.372 request: 00:36:11.372 { 00:36:11.372 "name": "key0", 00:36:11.372 "path": "/tmp/tmp.eztyszJEvm", 00:36:11.372 "method": "keyring_file_add_key", 00:36:11.372 "req_id": 1 00:36:11.372 } 00:36:11.372 Got JSON-RPC error response 00:36:11.372 response: 00:36:11.372 { 00:36:11.372 "code": -1, 00:36:11.372 "message": "Operation not permitted" 00:36:11.372 } 00:36:11.631 11:03:18 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:11.631 11:03:18 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:11.631 11:03:18 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:11.631 11:03:18 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:11.631 11:03:18 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.eztyszJEvm 00:36:11.631 11:03:18 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.eztyszJEvm 00:36:11.631 11:03:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.eztyszJEvm 00:36:11.631 11:03:19 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.eztyszJEvm 00:36:11.632 11:03:19 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:36:11.632 11:03:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:11.632 11:03:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:11.632 11:03:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:11.632 11:03:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:11.632 11:03:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:11.890 11:03:19 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:36:11.891 11:03:19 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:11.891 11:03:19 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:11.891 11:03:19 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:11.891 11:03:19 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:11.891 11:03:19 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:11.891 11:03:19 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:11.891 11:03:19 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:11.891 11:03:19 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:11.891 11:03:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:12.149 [2024-11-19 11:03:19.401699] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.eztyszJEvm': No such file or directory 00:36:12.149 [2024-11-19 11:03:19.401724] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:12.149 [2024-11-19 11:03:19.401739] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:12.149 [2024-11-19 11:03:19.401746] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:36:12.149 [2024-11-19 11:03:19.401753] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:12.149 [2024-11-19 11:03:19.401758] bdev_nvme.c:6763:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:12.149 request: 00:36:12.149 { 00:36:12.149 "name": "nvme0", 00:36:12.149 "trtype": "tcp", 00:36:12.149 "traddr": "127.0.0.1", 00:36:12.149 "adrfam": "ipv4", 00:36:12.149 "trsvcid": "4420", 00:36:12.149 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:12.149 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:12.149 "prchk_reftag": false, 00:36:12.149 "prchk_guard": false, 00:36:12.149 "hdgst": false, 00:36:12.149 "ddgst": false, 00:36:12.149 "psk": "key0", 00:36:12.149 "allow_unrecognized_csi": false, 00:36:12.149 "method": "bdev_nvme_attach_controller", 00:36:12.149 "req_id": 1 00:36:12.149 } 00:36:12.149 Got JSON-RPC error response 00:36:12.149 response: 00:36:12.149 { 00:36:12.149 "code": -19, 00:36:12.149 "message": "No such device" 00:36:12.149 } 00:36:12.149 11:03:19 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:12.149 11:03:19 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:12.150 11:03:19 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:12.150 11:03:19 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:12.150 11:03:19 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:36:12.150 11:03:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:12.409 11:03:19 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:12.409 11:03:19 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:36:12.409 11:03:19 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:12.409 11:03:19 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:12.409 11:03:19 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:12.409 11:03:19 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:12.409 11:03:19 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.cPoME4fYHH 00:36:12.409 11:03:19 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:12.409 11:03:19 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:12.409 11:03:19 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:12.409 11:03:19 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:12.409 11:03:19 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:36:12.409 11:03:19 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:12.409 11:03:19 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:12.409 11:03:19 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.cPoME4fYHH 00:36:12.409 11:03:19 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.cPoME4fYHH 00:36:12.409 11:03:19 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.cPoME4fYHH 00:36:12.409 11:03:19 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.cPoME4fYHH 00:36:12.409 11:03:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.cPoME4fYHH 00:36:12.667 11:03:19 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:12.668 11:03:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:12.926 nvme0n1 00:36:12.926 11:03:20 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:36:12.926 11:03:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:12.926 11:03:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:12.926 11:03:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:12.926 11:03:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:12.926 11:03:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:12.926 11:03:20 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:36:12.926 11:03:20 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:36:12.926 11:03:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:13.184 11:03:20 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:36:13.184 11:03:20 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:36:13.184 11:03:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:13.184 11:03:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:13.184 11:03:20 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:13.443 11:03:20 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:36:13.443 11:03:20 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:36:13.443 11:03:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:13.443 11:03:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:13.443 11:03:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:13.443 11:03:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:13.443 11:03:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:13.701 11:03:20 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:36:13.701 11:03:20 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:13.701 11:03:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:13.960 11:03:21 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:36:13.960 11:03:21 keyring_file -- keyring/file.sh@105 -- # jq length 00:36:13.960 11:03:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:13.960 11:03:21 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:36:13.960 11:03:21 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.cPoME4fYHH 00:36:13.960 11:03:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.cPoME4fYHH 00:36:14.219 11:03:21 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.6Yaly3ISLY 00:36:14.220 11:03:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.6Yaly3ISLY 00:36:14.479 11:03:21 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:14.479 11:03:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:14.737 nvme0n1 00:36:14.737 11:03:21 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:36:14.737 11:03:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:36:14.997 11:03:22 keyring_file -- keyring/file.sh@113 -- # config='{ 00:36:14.997 "subsystems": [ 00:36:14.997 { 00:36:14.997 "subsystem": "keyring", 00:36:14.997 "config": [ 00:36:14.997 { 00:36:14.997 "method": "keyring_file_add_key", 00:36:14.997 "params": { 00:36:14.997 "name": "key0", 00:36:14.997 "path": "/tmp/tmp.cPoME4fYHH" 00:36:14.997 } 00:36:14.997 }, 00:36:14.997 { 00:36:14.997 "method": "keyring_file_add_key", 00:36:14.997 "params": { 00:36:14.997 "name": "key1", 00:36:14.997 "path": "/tmp/tmp.6Yaly3ISLY" 00:36:14.997 } 00:36:14.997 } 00:36:14.997 ] 00:36:14.997 
}, 00:36:14.997 { 00:36:14.997 "subsystem": "iobuf", 00:36:14.997 "config": [ 00:36:14.997 { 00:36:14.997 "method": "iobuf_set_options", 00:36:14.997 "params": { 00:36:14.997 "small_pool_count": 8192, 00:36:14.997 "large_pool_count": 1024, 00:36:14.997 "small_bufsize": 8192, 00:36:14.997 "large_bufsize": 135168, 00:36:14.997 "enable_numa": false 00:36:14.997 } 00:36:14.997 } 00:36:14.997 ] 00:36:14.997 }, 00:36:14.997 { 00:36:14.997 "subsystem": "sock", 00:36:14.997 "config": [ 00:36:14.997 { 00:36:14.997 "method": "sock_set_default_impl", 00:36:14.997 "params": { 00:36:14.997 "impl_name": "posix" 00:36:14.997 } 00:36:14.997 }, 00:36:14.997 { 00:36:14.997 "method": "sock_impl_set_options", 00:36:14.997 "params": { 00:36:14.997 "impl_name": "ssl", 00:36:14.997 "recv_buf_size": 4096, 00:36:14.997 "send_buf_size": 4096, 00:36:14.997 "enable_recv_pipe": true, 00:36:14.997 "enable_quickack": false, 00:36:14.997 "enable_placement_id": 0, 00:36:14.997 "enable_zerocopy_send_server": true, 00:36:14.997 "enable_zerocopy_send_client": false, 00:36:14.997 "zerocopy_threshold": 0, 00:36:14.997 "tls_version": 0, 00:36:14.997 "enable_ktls": false 00:36:14.997 } 00:36:14.997 }, 00:36:14.997 { 00:36:14.997 "method": "sock_impl_set_options", 00:36:14.997 "params": { 00:36:14.997 "impl_name": "posix", 00:36:14.997 "recv_buf_size": 2097152, 00:36:14.997 "send_buf_size": 2097152, 00:36:14.997 "enable_recv_pipe": true, 00:36:14.997 "enable_quickack": false, 00:36:14.997 "enable_placement_id": 0, 00:36:14.997 "enable_zerocopy_send_server": true, 00:36:14.997 "enable_zerocopy_send_client": false, 00:36:14.997 "zerocopy_threshold": 0, 00:36:14.997 "tls_version": 0, 00:36:14.997 "enable_ktls": false 00:36:14.997 } 00:36:14.997 } 00:36:14.997 ] 00:36:14.997 }, 00:36:14.997 { 00:36:14.997 "subsystem": "vmd", 00:36:14.997 "config": [] 00:36:14.997 }, 00:36:14.997 { 00:36:14.997 "subsystem": "accel", 00:36:14.997 "config": [ 00:36:14.997 { 00:36:14.997 "method": "accel_set_options", 00:36:14.997 "params": { 00:36:14.997 "small_cache_size": 128, 00:36:14.997 "large_cache_size": 16, 00:36:14.997 "task_count": 2048, 00:36:14.997 "sequence_count": 2048, 00:36:14.997 "buf_count": 2048 00:36:14.997 } 00:36:14.997 } 00:36:14.997 ] 00:36:14.997 }, 00:36:14.997 { 00:36:14.997 "subsystem": "bdev", 00:36:14.997 "config": [ 00:36:14.997 { 00:36:14.997 "method": "bdev_set_options", 00:36:14.997 "params": { 00:36:14.997 "bdev_io_pool_size": 65535, 00:36:14.997 "bdev_io_cache_size": 256, 00:36:14.997 "bdev_auto_examine": true, 00:36:14.997 "iobuf_small_cache_size": 128, 00:36:14.997 "iobuf_large_cache_size": 16 00:36:14.997 } 00:36:14.997 }, 00:36:14.997 { 00:36:14.997 "method": "bdev_raid_set_options", 00:36:14.997 "params": { 00:36:14.997 "process_window_size_kb": 1024, 00:36:14.997 "process_max_bandwidth_mb_sec": 0 00:36:14.997 } 00:36:14.997 }, 00:36:14.997 { 00:36:14.997 "method": "bdev_iscsi_set_options", 00:36:14.997 "params": { 00:36:14.997 "timeout_sec": 30 00:36:14.997 } 00:36:14.997 }, 00:36:14.997 { 00:36:14.997 "method": "bdev_nvme_set_options", 00:36:14.997 "params": { 00:36:14.997 "action_on_timeout": "none", 00:36:14.997 "timeout_us": 0, 00:36:14.997 "timeout_admin_us": 0, 00:36:14.997 "keep_alive_timeout_ms": 10000, 00:36:14.997 "arbitration_burst": 0, 00:36:14.997 "low_priority_weight": 0, 00:36:14.997 "medium_priority_weight": 0, 00:36:14.997 "high_priority_weight": 0, 00:36:14.997 "nvme_adminq_poll_period_us": 10000, 00:36:14.997 "nvme_ioq_poll_period_us": 0, 00:36:14.997 "io_queue_requests": 512, 00:36:14.997 
"delay_cmd_submit": true, 00:36:14.997 "transport_retry_count": 4, 00:36:14.997 "bdev_retry_count": 3, 00:36:14.997 "transport_ack_timeout": 0, 00:36:14.997 "ctrlr_loss_timeout_sec": 0, 00:36:14.997 "reconnect_delay_sec": 0, 00:36:14.997 "fast_io_fail_timeout_sec": 0, 00:36:14.997 "disable_auto_failback": false, 00:36:14.997 "generate_uuids": false, 00:36:14.998 "transport_tos": 0, 00:36:14.998 "nvme_error_stat": false, 00:36:14.998 "rdma_srq_size": 0, 00:36:14.998 "io_path_stat": false, 00:36:14.998 "allow_accel_sequence": false, 00:36:14.998 "rdma_max_cq_size": 0, 00:36:14.998 "rdma_cm_event_timeout_ms": 0, 00:36:14.998 "dhchap_digests": [ 00:36:14.998 "sha256", 00:36:14.998 "sha384", 00:36:14.998 "sha512" 00:36:14.998 ], 00:36:14.998 "dhchap_dhgroups": [ 00:36:14.998 "null", 00:36:14.998 "ffdhe2048", 00:36:14.998 "ffdhe3072", 00:36:14.998 "ffdhe4096", 00:36:14.998 "ffdhe6144", 00:36:14.998 "ffdhe8192" 00:36:14.998 ] 00:36:14.998 } 00:36:14.998 }, 00:36:14.998 { 00:36:14.998 "method": "bdev_nvme_attach_controller", 00:36:14.998 "params": { 00:36:14.998 "name": "nvme0", 00:36:14.998 "trtype": "TCP", 00:36:14.998 "adrfam": "IPv4", 00:36:14.998 "traddr": "127.0.0.1", 00:36:14.998 "trsvcid": "4420", 00:36:14.998 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:14.998 "prchk_reftag": false, 00:36:14.998 "prchk_guard": false, 00:36:14.998 "ctrlr_loss_timeout_sec": 0, 00:36:14.998 "reconnect_delay_sec": 0, 00:36:14.998 "fast_io_fail_timeout_sec": 0, 00:36:14.998 "psk": "key0", 00:36:14.998 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:14.998 "hdgst": false, 00:36:14.998 "ddgst": false, 00:36:14.998 "multipath": "multipath" 00:36:14.998 } 00:36:14.998 }, 00:36:14.998 { 00:36:14.998 "method": "bdev_nvme_set_hotplug", 00:36:14.998 "params": { 00:36:14.998 "period_us": 100000, 00:36:14.998 "enable": false 00:36:14.998 } 00:36:14.998 }, 00:36:14.998 { 00:36:14.998 "method": "bdev_wait_for_examine" 00:36:14.998 } 00:36:14.998 ] 00:36:14.998 }, 00:36:14.998 { 00:36:14.998 "subsystem": "nbd", 00:36:14.998 "config": [] 00:36:14.998 } 00:36:14.998 ] 00:36:14.998 }' 00:36:14.998 11:03:22 keyring_file -- keyring/file.sh@115 -- # killprocess 1970828 00:36:14.998 11:03:22 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1970828 ']' 00:36:14.998 11:03:22 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1970828 00:36:14.998 11:03:22 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:14.998 11:03:22 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:14.998 11:03:22 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1970828 00:36:14.998 11:03:22 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:14.998 11:03:22 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:14.998 11:03:22 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1970828' 00:36:14.998 killing process with pid 1970828 00:36:14.998 11:03:22 keyring_file -- common/autotest_common.sh@973 -- # kill 1970828 00:36:14.998 Received shutdown signal, test time was about 1.000000 seconds 00:36:14.998 00:36:14.998 Latency(us) 00:36:14.998 [2024-11-19T10:03:22.447Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:14.998 [2024-11-19T10:03:22.447Z] =================================================================================================================== 00:36:14.998 [2024-11-19T10:03:22.447Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:14.998 11:03:22 
keyring_file -- common/autotest_common.sh@978 -- # wait 1970828 00:36:15.257 11:03:22 keyring_file -- keyring/file.sh@118 -- # bperfpid=1972346 00:36:15.257 11:03:22 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1972346 /var/tmp/bperf.sock 00:36:15.257 11:03:22 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1972346 ']' 00:36:15.257 11:03:22 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:15.258 11:03:22 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:36:15.258 11:03:22 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:15.258 11:03:22 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:15.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:15.258 11:03:22 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:36:15.258 "subsystems": [ 00:36:15.258 { 00:36:15.258 "subsystem": "keyring", 00:36:15.258 "config": [ 00:36:15.258 { 00:36:15.258 "method": "keyring_file_add_key", 00:36:15.258 "params": { 00:36:15.258 "name": "key0", 00:36:15.258 "path": "/tmp/tmp.cPoME4fYHH" 00:36:15.258 } 00:36:15.258 }, 00:36:15.258 { 00:36:15.258 "method": "keyring_file_add_key", 00:36:15.258 "params": { 00:36:15.258 "name": "key1", 00:36:15.258 "path": "/tmp/tmp.6Yaly3ISLY" 00:36:15.258 } 00:36:15.258 } 00:36:15.258 ] 00:36:15.258 }, 00:36:15.258 { 00:36:15.258 "subsystem": "iobuf", 00:36:15.258 "config": [ 00:36:15.258 { 00:36:15.258 "method": "iobuf_set_options", 00:36:15.258 "params": { 00:36:15.258 "small_pool_count": 8192, 00:36:15.258 "large_pool_count": 1024, 00:36:15.258 "small_bufsize": 8192, 00:36:15.258 "large_bufsize": 135168, 00:36:15.258 "enable_numa": false 00:36:15.258 } 00:36:15.258 } 00:36:15.258 ] 00:36:15.258 }, 00:36:15.258 { 00:36:15.258 "subsystem": "sock", 00:36:15.258 "config": [ 00:36:15.258 { 00:36:15.258 "method": "sock_set_default_impl", 00:36:15.258 "params": { 00:36:15.258 "impl_name": "posix" 00:36:15.258 } 00:36:15.258 }, 00:36:15.258 { 00:36:15.258 "method": "sock_impl_set_options", 00:36:15.258 "params": { 00:36:15.258 "impl_name": "ssl", 00:36:15.258 "recv_buf_size": 4096, 00:36:15.258 "send_buf_size": 4096, 00:36:15.258 "enable_recv_pipe": true, 00:36:15.258 "enable_quickack": false, 00:36:15.258 "enable_placement_id": 0, 00:36:15.258 "enable_zerocopy_send_server": true, 00:36:15.258 "enable_zerocopy_send_client": false, 00:36:15.258 "zerocopy_threshold": 0, 00:36:15.258 "tls_version": 0, 00:36:15.258 "enable_ktls": false 00:36:15.258 } 00:36:15.258 }, 00:36:15.258 { 00:36:15.258 "method": "sock_impl_set_options", 00:36:15.258 "params": { 00:36:15.258 "impl_name": "posix", 00:36:15.258 "recv_buf_size": 2097152, 00:36:15.258 "send_buf_size": 2097152, 00:36:15.258 "enable_recv_pipe": true, 00:36:15.258 "enable_quickack": false, 00:36:15.258 "enable_placement_id": 0, 00:36:15.258 "enable_zerocopy_send_server": true, 00:36:15.258 "enable_zerocopy_send_client": false, 00:36:15.258 "zerocopy_threshold": 0, 00:36:15.258 "tls_version": 0, 00:36:15.258 "enable_ktls": false 00:36:15.258 } 00:36:15.258 } 00:36:15.258 ] 00:36:15.258 }, 00:36:15.258 { 00:36:15.258 "subsystem": "vmd", 00:36:15.258 "config": [] 00:36:15.258 }, 00:36:15.258 { 00:36:15.258 "subsystem": "accel", 00:36:15.258 "config": [ 00:36:15.258 
{ 00:36:15.258 "method": "accel_set_options", 00:36:15.258 "params": { 00:36:15.258 "small_cache_size": 128, 00:36:15.258 "large_cache_size": 16, 00:36:15.258 "task_count": 2048, 00:36:15.258 "sequence_count": 2048, 00:36:15.258 "buf_count": 2048 00:36:15.258 } 00:36:15.258 } 00:36:15.258 ] 00:36:15.258 }, 00:36:15.258 { 00:36:15.258 "subsystem": "bdev", 00:36:15.258 "config": [ 00:36:15.258 { 00:36:15.258 "method": "bdev_set_options", 00:36:15.258 "params": { 00:36:15.258 "bdev_io_pool_size": 65535, 00:36:15.258 "bdev_io_cache_size": 256, 00:36:15.258 "bdev_auto_examine": true, 00:36:15.258 "iobuf_small_cache_size": 128, 00:36:15.258 "iobuf_large_cache_size": 16 00:36:15.258 } 00:36:15.258 }, 00:36:15.258 { 00:36:15.258 "method": "bdev_raid_set_options", 00:36:15.258 "params": { 00:36:15.258 "process_window_size_kb": 1024, 00:36:15.258 "process_max_bandwidth_mb_sec": 0 00:36:15.258 } 00:36:15.258 }, 00:36:15.258 { 00:36:15.258 "method": "bdev_iscsi_set_options", 00:36:15.258 "params": { 00:36:15.258 "timeout_sec": 30 00:36:15.258 } 00:36:15.258 }, 00:36:15.258 { 00:36:15.258 "method": "bdev_nvme_set_options", 00:36:15.258 "params": { 00:36:15.258 "action_on_timeout": "none", 00:36:15.258 "timeout_us": 0, 00:36:15.258 "timeout_admin_us": 0, 00:36:15.258 "keep_alive_timeout_ms": 10000, 00:36:15.258 "arbitration_burst": 0, 00:36:15.258 "low_priority_weight": 0, 00:36:15.258 "medium_priority_weight": 0, 00:36:15.258 "high_priority_weight": 0, 00:36:15.258 "nvme_adminq_poll_period_us": 10000, 00:36:15.258 "nvme_ioq_poll_period_us": 0, 00:36:15.258 "io_queue_requests": 512, 00:36:15.258 "delay_cmd_submit": true, 00:36:15.258 "transport_retry_count": 4, 00:36:15.258 "bdev_retry_count": 3, 00:36:15.258 "transport_ack_timeout": 0, 00:36:15.258 "ctrlr_loss_timeout_sec": 0, 00:36:15.258 "reconnect_delay_sec": 0, 00:36:15.258 "fast_io_fail_timeout_sec": 0, 00:36:15.258 "disable_auto_failback": false, 00:36:15.258 "generate_uuids": false, 00:36:15.258 "transport_tos": 0, 00:36:15.258 "nvme_error_stat": false, 00:36:15.258 "rdma_srq_size": 0, 00:36:15.258 "io_path_stat": false, 00:36:15.258 "allow_accel_sequence": false, 00:36:15.258 "rdma_max_cq_size": 0, 00:36:15.258 "rdma_cm_event_timeout_ms": 0, 00:36:15.258 "dhchap_digests": [ 00:36:15.258 "sha256", 00:36:15.258 "sha384", 00:36:15.258 "sha512" 00:36:15.258 ], 00:36:15.258 "dhchap_dhgroups": [ 00:36:15.258 "null", 00:36:15.258 "ffdhe2048", 00:36:15.258 "ffdhe3072", 00:36:15.258 "ffdhe4096", 00:36:15.258 "ffdhe6144", 00:36:15.258 "ffdhe8192" 00:36:15.258 ] 00:36:15.258 } 00:36:15.258 }, 00:36:15.258 { 00:36:15.258 "method": "bdev_nvme_attach_controller", 00:36:15.258 "params": { 00:36:15.258 "name": "nvme0", 00:36:15.258 "trtype": "TCP", 00:36:15.258 "adrfam": "IPv4", 00:36:15.258 "traddr": "127.0.0.1", 00:36:15.258 "trsvcid": "4420", 00:36:15.258 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:15.258 "prchk_reftag": false, 00:36:15.258 "prchk_guard": false, 00:36:15.258 "ctrlr_loss_timeout_sec": 0, 00:36:15.258 "reconnect_delay_sec": 0, 00:36:15.258 "fast_io_fail_timeout_sec": 0, 00:36:15.258 "psk": "key0", 00:36:15.258 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:15.258 "hdgst": false, 00:36:15.258 "ddgst": false, 00:36:15.258 "multipath": "multipath" 00:36:15.258 } 00:36:15.258 }, 00:36:15.258 { 00:36:15.258 "method": "bdev_nvme_set_hotplug", 00:36:15.258 "params": { 00:36:15.258 "period_us": 100000, 00:36:15.258 "enable": false 00:36:15.258 } 00:36:15.258 }, 00:36:15.258 { 00:36:15.258 "method": "bdev_wait_for_examine" 00:36:15.258 } 00:36:15.258 
] 00:36:15.258 }, 00:36:15.258 { 00:36:15.258 "subsystem": "nbd", 00:36:15.258 "config": [] 00:36:15.258 } 00:36:15.258 ] 00:36:15.258 }' 00:36:15.258 11:03:22 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:15.258 11:03:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:15.258 [2024-11-19 11:03:22.499653] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 00:36:15.258 [2024-11-19 11:03:22.499705] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1972346 ] 00:36:15.258 [2024-11-19 11:03:22.574406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:15.258 [2024-11-19 11:03:22.616789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:15.517 [2024-11-19 11:03:22.778354] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:16.085 11:03:23 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:16.085 11:03:23 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:36:16.085 11:03:23 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:36:16.085 11:03:23 keyring_file -- keyring/file.sh@121 -- # jq length 00:36:16.085 11:03:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:16.344 11:03:23 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:36:16.344 11:03:23 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:36:16.344 11:03:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:16.344 11:03:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:16.344 11:03:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:16.344 11:03:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:16.344 11:03:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:16.344 11:03:23 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:36:16.344 11:03:23 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:36:16.344 11:03:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:16.344 11:03:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:16.344 11:03:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:16.344 11:03:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:16.344 11:03:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:16.603 11:03:23 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:36:16.603 11:03:23 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:36:16.603 11:03:23 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:36:16.603 11:03:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:36:16.862 11:03:24 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:36:16.862 11:03:24 keyring_file -- keyring/file.sh@1 -- # cleanup 00:36:16.862 11:03:24 keyring_file -- keyring/file.sh@19 
-- # rm -f /tmp/tmp.cPoME4fYHH /tmp/tmp.6Yaly3ISLY 00:36:16.862 11:03:24 keyring_file -- keyring/file.sh@20 -- # killprocess 1972346 00:36:16.862 11:03:24 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1972346 ']' 00:36:16.862 11:03:24 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1972346 00:36:16.862 11:03:24 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:16.862 11:03:24 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:16.862 11:03:24 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1972346 00:36:16.862 11:03:24 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:16.862 11:03:24 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:16.862 11:03:24 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1972346' 00:36:16.862 killing process with pid 1972346 00:36:16.862 11:03:24 keyring_file -- common/autotest_common.sh@973 -- # kill 1972346 00:36:16.862 Received shutdown signal, test time was about 1.000000 seconds 00:36:16.862 00:36:16.862 Latency(us) 00:36:16.862 [2024-11-19T10:03:24.311Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:16.862 [2024-11-19T10:03:24.311Z] =================================================================================================================== 00:36:16.862 [2024-11-19T10:03:24.311Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:16.862 11:03:24 keyring_file -- common/autotest_common.sh@978 -- # wait 1972346 00:36:17.121 11:03:24 keyring_file -- keyring/file.sh@21 -- # killprocess 1970817 00:36:17.121 11:03:24 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1970817 ']' 00:36:17.121 11:03:24 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1970817 00:36:17.121 11:03:24 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:17.121 11:03:24 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:17.121 11:03:24 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1970817 00:36:17.121 11:03:24 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:17.121 11:03:24 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:17.121 11:03:24 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1970817' 00:36:17.121 killing process with pid 1970817 00:36:17.121 11:03:24 keyring_file -- common/autotest_common.sh@973 -- # kill 1970817 00:36:17.121 11:03:24 keyring_file -- common/autotest_common.sh@978 -- # wait 1970817 00:36:17.381 00:36:17.381 real 0m11.877s 00:36:17.381 user 0m29.597s 00:36:17.381 sys 0m2.682s 00:36:17.381 11:03:24 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:17.381 11:03:24 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:17.381 ************************************ 00:36:17.381 END TEST keyring_file 00:36:17.381 ************************************ 00:36:17.381 11:03:24 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:36:17.381 11:03:24 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:17.381 11:03:24 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:17.381 11:03:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:17.381 11:03:24 -- 
common/autotest_common.sh@10 -- # set +x 00:36:17.381 ************************************ 00:36:17.381 START TEST keyring_linux 00:36:17.381 ************************************ 00:36:17.381 11:03:24 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:17.381 Joined session keyring: 882080680 00:36:17.642 * Looking for test storage... 00:36:17.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:17.642 11:03:24 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:17.642 11:03:24 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:36:17.642 11:03:24 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:17.642 11:03:24 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:17.642 11:03:24 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:17.642 11:03:24 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:17.642 11:03:24 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:17.642 11:03:24 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:36:17.642 11:03:24 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:36:17.642 11:03:24 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:36:17.642 11:03:24 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:36:17.642 11:03:24 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:36:17.642 11:03:24 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:36:17.642 11:03:24 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:36:17.642 11:03:24 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:17.642 11:03:24 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:36:17.642 11:03:24 keyring_linux -- scripts/common.sh@345 -- # : 1 00:36:17.642 11:03:24 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:17.642 11:03:24 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:17.642 11:03:24 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:36:17.642 11:03:24 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:36:17.642 11:03:24 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:17.642 11:03:24 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:36:17.642 11:03:24 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:36:17.642 11:03:24 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:36:17.642 11:03:24 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:36:17.642 11:03:24 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:17.642 11:03:24 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:36:17.642 11:03:24 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:36:17.642 11:03:24 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:17.642 11:03:24 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:17.642 11:03:24 keyring_linux -- scripts/common.sh@368 -- # return 0 00:36:17.642 11:03:24 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:17.642 11:03:24 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:17.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:17.642 --rc genhtml_branch_coverage=1 00:36:17.642 --rc genhtml_function_coverage=1 00:36:17.642 --rc genhtml_legend=1 00:36:17.642 --rc geninfo_all_blocks=1 00:36:17.642 --rc geninfo_unexecuted_blocks=1 00:36:17.642 00:36:17.642 ' 00:36:17.642 11:03:24 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:17.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:17.642 --rc genhtml_branch_coverage=1 00:36:17.642 --rc genhtml_function_coverage=1 00:36:17.642 --rc genhtml_legend=1 00:36:17.642 --rc geninfo_all_blocks=1 00:36:17.642 --rc geninfo_unexecuted_blocks=1 00:36:17.642 00:36:17.642 ' 00:36:17.642 11:03:24 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:17.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:17.642 --rc genhtml_branch_coverage=1 00:36:17.642 --rc genhtml_function_coverage=1 00:36:17.642 --rc genhtml_legend=1 00:36:17.642 --rc geninfo_all_blocks=1 00:36:17.642 --rc geninfo_unexecuted_blocks=1 00:36:17.642 00:36:17.642 ' 00:36:17.642 11:03:24 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:17.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:17.642 --rc genhtml_branch_coverage=1 00:36:17.642 --rc genhtml_function_coverage=1 00:36:17.642 --rc genhtml_legend=1 00:36:17.642 --rc geninfo_all_blocks=1 00:36:17.642 --rc geninfo_unexecuted_blocks=1 00:36:17.642 00:36:17.642 ' 00:36:17.642 11:03:24 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:17.642 11:03:24 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:17.642 11:03:24 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:36:17.642 11:03:24 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:17.642 11:03:24 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:17.642 11:03:24 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:17.642 11:03:24 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:17.642 11:03:24 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:36:17.642 11:03:24 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:17.642 11:03:24 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:17.642 11:03:24 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:17.642 11:03:24 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:17.642 11:03:24 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:17.642 11:03:24 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:36:17.642 11:03:24 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:36:17.642 11:03:24 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:17.642 11:03:24 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:17.642 11:03:24 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:17.642 11:03:24 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:17.642 11:03:24 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:17.642 11:03:24 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:36:17.642 11:03:24 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:17.642 11:03:24 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:17.642 11:03:24 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:17.642 11:03:24 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:17.642 11:03:24 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:17.642 11:03:24 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:17.642 11:03:24 keyring_linux -- paths/export.sh@5 -- # export PATH 00:36:17.642 11:03:24 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
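The host identity exported above comes from `nvme gen-hostnqn`, which emits a uuid-form NQN; nvme-cli prefers a stable machine UUID when it can find one, which appears to be why the same 80aaeb9f-... value recurs across runs on this node. A rough hand-rolled equivalent of the shape it produces (using a random UUID, so the value will differ; this is an illustration, not what gen-hostnqn does internally):

#!/usr/bin/env bash
# Sketch: build the uuid-form NQN shape that `nvme gen-hostnqn` emits.
printf 'nqn.2014-08.org.nvmexpress:uuid:%s\n' "$(uuidgen)"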
00:36:17.642 11:03:24 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:36:17.642 11:03:24 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:17.642 11:03:24 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:17.642 11:03:24 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:17.642 11:03:24 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:17.642 11:03:24 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:17.642 11:03:24 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:17.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:17.643 11:03:24 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:17.643 11:03:24 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:17.643 11:03:24 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:17.643 11:03:24 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:17.643 11:03:24 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:17.643 11:03:24 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:17.643 11:03:24 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:36:17.643 11:03:24 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:36:17.643 11:03:24 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:36:17.643 11:03:24 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:36:17.643 11:03:24 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:17.643 11:03:24 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:36:17.643 11:03:24 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:17.643 11:03:24 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:17.643 11:03:24 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:36:17.643 11:03:24 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:17.643 11:03:24 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:17.643 11:03:24 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:36:17.643 11:03:24 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:17.643 11:03:24 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:36:17.643 11:03:24 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:36:17.643 11:03:24 keyring_linux -- nvmf/common.sh@733 -- # python - 00:36:17.643 11:03:25 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:36:17.643 11:03:25 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:36:17.643 /tmp/:spdk-test:key0 00:36:17.643 11:03:25 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:36:17.643 11:03:25 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:17.643 11:03:25 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:36:17.643 11:03:25 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:17.643 11:03:25 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:17.643 11:03:25 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:36:17.643 
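prep_key above turns the raw hex key into the NVMe TLS PSK interchange form: per the format_key xtrace, a little-endian CRC32 of the key string is appended and the result base64-encoded, yielding exactly the NVMeTLSkey-1:00:...: value that keyctl stores further down. A standalone sketch of that transform, mirroring the python heredoc the xtrace shows (key0 value taken from linux.sh@13 above):

#!/usr/bin/env bash
# Reproduce the interchange PSK for key0 the way format_key appears to:
# base64(key || crc32_le(key)), wrapped as NVMeTLSkey-1:<digest>:<b64>:.
key=00112233445566778899aabbccddeeff
python - <<EOF
import base64, zlib
key = b"$key"
crc = zlib.crc32(key).to_bytes(4, byteorder="little")
print("NVMeTLSkey-1:{:02x}:{}:".format(0, base64.b64encode(key + crc).decode()))
EOF
# Expected: NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: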
11:03:25 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:17.643 11:03:25 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:17.643 11:03:25 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:36:17.643 11:03:25 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:17.643 11:03:25 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:36:17.643 11:03:25 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:36:17.643 11:03:25 keyring_linux -- nvmf/common.sh@733 -- # python - 00:36:17.643 11:03:25 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:36:17.643 11:03:25 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:36:17.643 /tmp/:spdk-test:key1 00:36:17.643 11:03:25 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1972905 00:36:17.643 11:03:25 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:17.643 11:03:25 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1972905 00:36:17.643 11:03:25 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1972905 ']' 00:36:17.643 11:03:25 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:17.643 11:03:25 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:17.643 11:03:25 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:17.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:17.643 11:03:25 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:17.643 11:03:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:17.902 [2024-11-19 11:03:25.131514] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
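linux.sh@50 launches spdk_tgt here, and the rpc_cmd at linux.sh@54 (whose arguments this log elides) provisions the TCP transport, the null0 bdev, and the TLS listener that the "TCP Transport Init" and "Listening on 127.0.0.1 port 4420" notices below confirm. A hypothetical reconstruction of that target-side sequence; every command and flag below is an assumption based on standard rpc.py usage, not read from this log:

#!/usr/bin/env bash
# Assumed reconstruction -- not captured in this log's xtrace.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp
$rpc bdev_null_create null0 100 4096          # the "null0" bdev noted below
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 127.0.0.1 -s 4420 --secure-channel   # triggers the TLS notice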
00:36:17.902 [2024-11-19 11:03:25.131562] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1972905 ] 00:36:17.902 [2024-11-19 11:03:25.186072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:17.902 [2024-11-19 11:03:25.226034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:18.161 11:03:25 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:18.161 11:03:25 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:36:18.161 11:03:25 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:36:18.161 11:03:25 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.161 11:03:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:18.161 [2024-11-19 11:03:25.444860] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:18.161 null0 00:36:18.161 [2024-11-19 11:03:25.476897] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:18.161 [2024-11-19 11:03:25.477282] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:18.161 11:03:25 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.161 11:03:25 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:36:18.161 142430358 00:36:18.161 11:03:25 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:36:18.161 693305687 00:36:18.161 11:03:25 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1972915 00:36:18.161 11:03:25 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:36:18.161 11:03:25 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1972915 /var/tmp/bperf.sock 00:36:18.161 11:03:25 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1972915 ']' 00:36:18.161 11:03:25 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:18.162 11:03:25 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:18.162 11:03:25 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:18.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:18.162 11:03:25 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:18.162 11:03:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:18.162 [2024-11-19 11:03:25.549929] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization... 
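linux.sh@66 and @67 above seed the session keyring (@s) with the two interchange PSKs; keyctl echoes the serial numbers 142430358 and 693305687 that the later check_keys/get_keysn lookups must match. The same round-trip in isolation (serials are per-session, so any rerun will print different numbers):

#!/usr/bin/env bash
# Sketch of the keyctl round-trip the test exercises against @s.
psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
sn=$(keyctl add user :spdk-test:key0 "$psk" @s)   # prints the new serial
keyctl search @s user :spdk-test:key0             # resolves name -> same serial
keyctl print "$sn"                                # dumps the PSK payload back
keyctl unlink "$sn"                               # "1 links removed", as below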
00:36:18.162 [2024-11-19 11:03:25.549989] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1972915 ] 00:36:18.162 [2024-11-19 11:03:25.608845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:18.421 [2024-11-19 11:03:25.652527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:18.421 11:03:25 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:18.421 11:03:25 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:36:18.421 11:03:25 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:36:18.421 11:03:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:36:18.680 11:03:25 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:36:18.680 11:03:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:18.939 11:03:26 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:18.939 11:03:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:18.939 [2024-11-19 11:03:26.316920] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:19.198 nvme0n1 00:36:19.198 11:03:26 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:36:19.198 11:03:26 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:36:19.198 11:03:26 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:19.198 11:03:26 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:19.198 11:03:26 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:19.198 11:03:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:19.198 11:03:26 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:36:19.198 11:03:26 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:19.198 11:03:26 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:36:19.198 11:03:26 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:36:19.198 11:03:26 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:19.198 11:03:26 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:36:19.198 11:03:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:19.458 11:03:26 keyring_linux -- keyring/linux.sh@25 -- # sn=142430358 00:36:19.458 11:03:26 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:36:19.458 11:03:26 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:19.458 11:03:26 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 142430358 == \1\4\2\4\3\0\3\5\8 ]] 00:36:19.458 11:03:26 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 142430358 00:36:19.458 11:03:26 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:36:19.458 11:03:26 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:19.458 Running I/O for 1 seconds... 00:36:20.838 21024.00 IOPS, 82.12 MiB/s 00:36:20.838 Latency(us) 00:36:20.838 [2024-11-19T10:03:28.287Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:20.838 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:20.838 nvme0n1 : 1.01 21024.35 82.13 0.00 0.00 6067.52 4445.05 9801.91 00:36:20.838 [2024-11-19T10:03:28.287Z] =================================================================================================================== 00:36:20.838 [2024-11-19T10:03:28.287Z] Total : 21024.35 82.13 0.00 0.00 6067.52 4445.05 9801.91 00:36:20.838 { 00:36:20.838 "results": [ 00:36:20.838 { 00:36:20.838 "job": "nvme0n1", 00:36:20.838 "core_mask": "0x2", 00:36:20.838 "workload": "randread", 00:36:20.838 "status": "finished", 00:36:20.838 "queue_depth": 128, 00:36:20.838 "io_size": 4096, 00:36:20.838 "runtime": 1.006119, 00:36:20.838 "iops": 21024.351990172137, 00:36:20.838 "mibps": 82.12637496160991, 00:36:20.838 "io_failed": 0, 00:36:20.838 "io_timeout": 0, 00:36:20.838 "avg_latency_us": 6067.517659125338, 00:36:20.838 "min_latency_us": 4445.050434782609, 00:36:20.838 "max_latency_us": 9801.906086956522 00:36:20.838 } 00:36:20.838 ], 00:36:20.838 "core_count": 1 00:36:20.838 } 00:36:20.838 11:03:27 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:20.838 11:03:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:20.838 11:03:28 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:36:20.838 11:03:28 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:36:20.838 11:03:28 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:20.838 11:03:28 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:20.838 11:03:28 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:20.838 11:03:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:21.097 11:03:28 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:36:21.097 11:03:28 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:21.097 11:03:28 keyring_linux -- keyring/linux.sh@23 -- # return 00:36:21.097 11:03:28 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:21.097 11:03:28 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:36:21.097 11:03:28 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:36:21.097 11:03:28 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:21.097 11:03:28 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:21.097 11:03:28 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:21.097 11:03:28 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:21.097 11:03:28 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:21.097 11:03:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:21.097 [2024-11-19 11:03:28.506008] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:21.097 [2024-11-19 11:03:28.506895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xebda70 (107): Transport endpoint is not connected 00:36:21.097 [2024-11-19 11:03:28.507891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xebda70 (9): Bad file descriptor 00:36:21.097 [2024-11-19 11:03:28.508892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:36:21.097 [2024-11-19 11:03:28.508901] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:21.097 [2024-11-19 11:03:28.508908] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:36:21.097 [2024-11-19 11:03:28.508916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
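The attach with :spdk-test:key1 is wrapped in NOT, so the transport errors above and the JSON-RPC error dumped next are the expected outcome: the target side has no matching PSK for key1, the TLS handshake fails, and the test passes precisely because the call does not. A simplified sketch of that idiom (the real helper in autotest_common.sh also inspects the exit-status range, per the es=1 bookkeeping after the RPC dump below):

#!/usr/bin/env bash
# Simplified NOT() negative-test idiom: success means the command failed.
NOT() { ! "$@"; }
NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
    --psk :spdk-test:key1 \
  && echo 'mismatched PSK rejected, as expected'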
00:36:21.097 request: 00:36:21.097 { 00:36:21.097 "name": "nvme0", 00:36:21.097 "trtype": "tcp", 00:36:21.097 "traddr": "127.0.0.1", 00:36:21.097 "adrfam": "ipv4", 00:36:21.097 "trsvcid": "4420", 00:36:21.097 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:21.097 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:21.097 "prchk_reftag": false, 00:36:21.097 "prchk_guard": false, 00:36:21.097 "hdgst": false, 00:36:21.097 "ddgst": false, 00:36:21.097 "psk": ":spdk-test:key1", 00:36:21.097 "allow_unrecognized_csi": false, 00:36:21.097 "method": "bdev_nvme_attach_controller", 00:36:21.097 "req_id": 1 00:36:21.097 } 00:36:21.097 Got JSON-RPC error response 00:36:21.097 response: 00:36:21.097 { 00:36:21.097 "code": -5, 00:36:21.097 "message": "Input/output error" 00:36:21.097 } 00:36:21.097 11:03:28 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:36:21.097 11:03:28 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:21.097 11:03:28 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:21.097 11:03:28 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:21.097 11:03:28 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:36:21.097 11:03:28 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:21.097 11:03:28 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:36:21.097 11:03:28 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:36:21.097 11:03:28 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:36:21.097 11:03:28 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:21.097 11:03:28 keyring_linux -- keyring/linux.sh@33 -- # sn=142430358 00:36:21.097 11:03:28 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 142430358 00:36:21.097 1 links removed 00:36:21.097 11:03:28 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:21.097 11:03:28 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:36:21.357 11:03:28 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:36:21.357 11:03:28 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:36:21.357 11:03:28 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:36:21.357 11:03:28 keyring_linux -- keyring/linux.sh@33 -- # sn=693305687 00:36:21.357 11:03:28 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 693305687 00:36:21.357 1 links removed 00:36:21.357 11:03:28 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1972915 00:36:21.357 11:03:28 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1972915 ']' 00:36:21.357 11:03:28 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1972915 00:36:21.357 11:03:28 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:36:21.357 11:03:28 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:21.357 11:03:28 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1972915 00:36:21.357 11:03:28 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:21.357 11:03:28 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:21.357 11:03:28 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1972915' 00:36:21.357 killing process with pid 1972915 00:36:21.357 11:03:28 keyring_linux -- common/autotest_common.sh@973 -- # kill 1972915 00:36:21.357 Received shutdown signal, test time was about 1.000000 seconds 00:36:21.357 00:36:21.357 
Latency(us) 00:36:21.357 [2024-11-19T10:03:28.806Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:21.357 [2024-11-19T10:03:28.806Z] =================================================================================================================== 00:36:21.357 [2024-11-19T10:03:28.806Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:21.357 11:03:28 keyring_linux -- common/autotest_common.sh@978 -- # wait 1972915 00:36:21.357 11:03:28 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1972905 00:36:21.357 11:03:28 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1972905 ']' 00:36:21.357 11:03:28 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1972905 00:36:21.357 11:03:28 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:36:21.357 11:03:28 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:21.357 11:03:28 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1972905 00:36:21.616 11:03:28 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:21.616 11:03:28 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:21.616 11:03:28 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1972905' 00:36:21.616 killing process with pid 1972905 00:36:21.616 11:03:28 keyring_linux -- common/autotest_common.sh@973 -- # kill 1972905 00:36:21.616 11:03:28 keyring_linux -- common/autotest_common.sh@978 -- # wait 1972905 00:36:21.879 00:36:21.879 real 0m4.326s 00:36:21.879 user 0m8.203s 00:36:21.879 sys 0m1.412s 00:36:21.879 11:03:29 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:21.879 11:03:29 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:21.879 ************************************ 00:36:21.879 END TEST keyring_linux 00:36:21.879 ************************************ 00:36:21.879 11:03:29 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:36:21.879 11:03:29 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:36:21.879 11:03:29 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:36:21.879 11:03:29 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:36:21.879 11:03:29 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:36:21.879 11:03:29 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:36:21.879 11:03:29 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:36:21.879 11:03:29 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:36:21.879 11:03:29 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:36:21.879 11:03:29 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:36:21.879 11:03:29 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:36:21.879 11:03:29 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:36:21.879 11:03:29 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:36:21.879 11:03:29 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:36:21.880 11:03:29 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:36:21.880 11:03:29 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:36:21.880 11:03:29 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:36:21.880 11:03:29 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:21.880 11:03:29 -- common/autotest_common.sh@10 -- # set +x 00:36:21.880 11:03:29 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:36:21.880 11:03:29 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:36:21.880 11:03:29 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:36:21.880 11:03:29 -- common/autotest_common.sh@10 -- # set +x 00:36:27.161 INFO: APP EXITING 
00:36:27.161 INFO: killing all VMs 00:36:27.161 INFO: killing vhost app 00:36:27.161 INFO: EXIT DONE 00:36:29.699 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:36:29.699 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:36:29.699 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:36:29.699 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:36:29.699 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:36:29.699 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:36:29.699 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:36:29.699 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:36:29.699 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:36:29.699 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:36:29.699 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:36:29.699 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:36:29.699 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:36:29.699 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:36:29.699 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:36:29.699 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:36:29.699 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:36:32.992 Cleaning 00:36:32.992 Removing: /var/run/dpdk/spdk0/config 00:36:32.992 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:32.992 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:32.992 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:32.992 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:32.992 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:32.992 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:32.992 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:32.992 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:32.992 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:32.992 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:32.992 Removing: /var/run/dpdk/spdk1/config 00:36:32.992 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:32.992 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:32.992 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:32.992 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:32.992 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:32.992 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:32.992 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:32.992 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:32.992 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:36:32.992 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:32.992 Removing: /var/run/dpdk/spdk2/config 00:36:32.992 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:32.992 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:32.992 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:32.992 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:32.992 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:32.992 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:32.992 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:32.992 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:32.992 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:32.992 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:32.992 Removing: /var/run/dpdk/spdk3/config 00:36:32.992 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:32.992 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:32.992 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:32.992 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:32.992 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:32.992 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:32.992 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:32.992 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:32.992 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:32.992 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:32.992 Removing: /var/run/dpdk/spdk4/config 00:36:32.992 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:32.992 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:32.992 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:32.992 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:32.992 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:32.992 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:32.992 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:32.992 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:32.992 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:36:32.992 Removing: /var/run/dpdk/spdk4/hugepage_info 00:36:32.992 Removing: /dev/shm/bdev_svc_trace.1 00:36:32.992 Removing: /dev/shm/nvmf_trace.0 00:36:32.992 Removing: /dev/shm/spdk_tgt_trace.pid1495044 00:36:32.992 Removing: /var/run/dpdk/spdk0 00:36:32.992 Removing: /var/run/dpdk/spdk1 00:36:32.992 Removing: /var/run/dpdk/spdk2 00:36:32.992 Removing: /var/run/dpdk/spdk3 00:36:32.992 Removing: /var/run/dpdk/spdk4 00:36:32.992 Removing: /var/run/dpdk/spdk_pid1492893 00:36:32.992 Removing: /var/run/dpdk/spdk_pid1493959 00:36:32.992 Removing: /var/run/dpdk/spdk_pid1495044 00:36:32.992 Removing: /var/run/dpdk/spdk_pid1495679 00:36:32.992 Removing: /var/run/dpdk/spdk_pid1496623 00:36:32.992 Removing: /var/run/dpdk/spdk_pid1496647 00:36:32.992 Removing: /var/run/dpdk/spdk_pid1497706 00:36:32.992 Removing: /var/run/dpdk/spdk_pid1497859 00:36:32.992 Removing: /var/run/dpdk/spdk_pid1498162 00:36:32.992 Removing: /var/run/dpdk/spdk_pid1499752 00:36:32.992 Removing: /var/run/dpdk/spdk_pid1500916 00:36:32.992 Removing: /var/run/dpdk/spdk_pid1501312 00:36:32.992 Removing: /var/run/dpdk/spdk_pid1501538 00:36:32.992 Removing: /var/run/dpdk/spdk_pid1501736 00:36:32.992 Removing: /var/run/dpdk/spdk_pid1501991 00:36:32.992 Removing: /var/run/dpdk/spdk_pid1502242 00:36:32.992 Removing: /var/run/dpdk/spdk_pid1502490 00:36:32.992 Removing: /var/run/dpdk/spdk_pid1502776 00:36:32.992 Removing: /var/run/dpdk/spdk_pid1503519 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1506518 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1506774 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1507030 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1507033 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1507531 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1507538 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1508032 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1508035 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1508393 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1508538 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1508794 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1508800 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1509466 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1509739 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1510035 00:36:32.993 Removing: 
/var/run/dpdk/spdk_pid1514319 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1518628 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1528886 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1529576 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1533856 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1534110 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1538597 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1544361 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1547093 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1557307 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1566747 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1568585 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1569507 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1586387 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1590351 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1636202 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1641390 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1647187 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1653640 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1653651 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1654562 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1655473 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1656258 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1657001 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1657012 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1657242 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1657478 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1657606 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1658763 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1659481 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1660399 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1661013 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1661078 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1661317 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1662332 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1663322 00:36:32.993 Removing: /var/run/dpdk/spdk_pid1671575 00:36:33.252 Removing: /var/run/dpdk/spdk_pid1700765 00:36:33.252 Removing: /var/run/dpdk/spdk_pid1705276 00:36:33.252 Removing: /var/run/dpdk/spdk_pid1706881 00:36:33.252 Removing: /var/run/dpdk/spdk_pid1708719 00:36:33.252 Removing: /var/run/dpdk/spdk_pid1708740 00:36:33.252 Removing: /var/run/dpdk/spdk_pid1708971 00:36:33.252 Removing: /var/run/dpdk/spdk_pid1709140 00:36:33.252 Removing: /var/run/dpdk/spdk_pid1709632 00:36:33.252 Removing: /var/run/dpdk/spdk_pid1711326 00:36:33.252 Removing: /var/run/dpdk/spdk_pid1712297 00:36:33.252 Removing: /var/run/dpdk/spdk_pid1712705 00:36:33.252 Removing: /var/run/dpdk/spdk_pid1714921 00:36:33.252 Removing: /var/run/dpdk/spdk_pid1715321 00:36:33.252 Removing: /var/run/dpdk/spdk_pid1715917 00:36:33.252 Removing: /var/run/dpdk/spdk_pid1720186 00:36:33.252 Removing: /var/run/dpdk/spdk_pid1725591 00:36:33.252 Removing: /var/run/dpdk/spdk_pid1725593 00:36:33.252 Removing: /var/run/dpdk/spdk_pid1725594 00:36:33.252 Removing: /var/run/dpdk/spdk_pid1729522 00:36:33.252 Removing: /var/run/dpdk/spdk_pid1738458 00:36:33.252 Removing: /var/run/dpdk/spdk_pid1742264 00:36:33.252 Removing: /var/run/dpdk/spdk_pid1748330 00:36:33.252 Removing: /var/run/dpdk/spdk_pid1749553 00:36:33.252 Removing: /var/run/dpdk/spdk_pid1750875 00:36:33.252 Removing: /var/run/dpdk/spdk_pid1752420 00:36:33.252 Removing: /var/run/dpdk/spdk_pid1756907 00:36:33.252 Removing: /var/run/dpdk/spdk_pid1761236 00:36:33.252 Removing: /var/run/dpdk/spdk_pid1765254 00:36:33.252 Removing: /var/run/dpdk/spdk_pid1772636 00:36:33.252 Removing: /var/run/dpdk/spdk_pid1772642 00:36:33.252 Removing: 
/var/run/dpdk/spdk_pid1777355 00:36:33.252 Removing: /var/run/dpdk/spdk_pid1777583 00:36:33.252 Removing: /var/run/dpdk/spdk_pid1777807 00:36:33.252 Removing: /var/run/dpdk/spdk_pid1778086 00:36:33.252 Removing: /var/run/dpdk/spdk_pid1778275 00:36:33.252 Removing: /var/run/dpdk/spdk_pid1782670 00:36:33.252 Removing: /var/run/dpdk/spdk_pid1783190 00:36:33.252 Removing: /var/run/dpdk/spdk_pid1788103 00:36:33.252 Removing: /var/run/dpdk/spdk_pid1790718 00:36:33.252 Removing: /var/run/dpdk/spdk_pid1796121 00:36:33.252 Removing: /var/run/dpdk/spdk_pid1801463 00:36:33.253 Removing: /var/run/dpdk/spdk_pid1810050 00:36:33.253 Removing: /var/run/dpdk/spdk_pid1817011 00:36:33.253 Removing: /var/run/dpdk/spdk_pid1817031 00:36:33.253 Removing: /var/run/dpdk/spdk_pid1836315 00:36:33.253 Removing: /var/run/dpdk/spdk_pid1836789 00:36:33.253 Removing: /var/run/dpdk/spdk_pid1837477 00:36:33.253 Removing: /var/run/dpdk/spdk_pid1837954 00:36:33.253 Removing: /var/run/dpdk/spdk_pid1838697 00:36:33.253 Removing: /var/run/dpdk/spdk_pid1839306 00:36:33.253 Removing: /var/run/dpdk/spdk_pid1839861 00:36:33.253 Removing: /var/run/dpdk/spdk_pid1840343 00:36:33.253 Removing: /var/run/dpdk/spdk_pid1844592 00:36:33.253 Removing: /var/run/dpdk/spdk_pid1844826 00:36:33.253 Removing: /var/run/dpdk/spdk_pid1850864 00:36:33.253 Removing: /var/run/dpdk/spdk_pid1850950 00:36:33.253 Removing: /var/run/dpdk/spdk_pid1856380 00:36:33.253 Removing: /var/run/dpdk/spdk_pid1860447 00:36:33.253 Removing: /var/run/dpdk/spdk_pid1870158 00:36:33.253 Removing: /var/run/dpdk/spdk_pid1870798 00:36:33.253 Removing: /var/run/dpdk/spdk_pid1874897 00:36:33.253 Removing: /var/run/dpdk/spdk_pid1875174 00:36:33.253 Removing: /var/run/dpdk/spdk_pid1879907 00:36:33.512 Removing: /var/run/dpdk/spdk_pid1885545 00:36:33.512 Removing: /var/run/dpdk/spdk_pid1888125 00:36:33.512 Removing: /var/run/dpdk/spdk_pid1898082 00:36:33.512 Removing: /var/run/dpdk/spdk_pid1906888 00:36:33.512 Removing: /var/run/dpdk/spdk_pid1908564 00:36:33.512 Removing: /var/run/dpdk/spdk_pid1909483 00:36:33.512 Removing: /var/run/dpdk/spdk_pid1925513 00:36:33.512 Removing: /var/run/dpdk/spdk_pid1929708 00:36:33.512 Removing: /var/run/dpdk/spdk_pid1932463 00:36:33.512 Removing: /var/run/dpdk/spdk_pid1940251 00:36:33.512 Removing: /var/run/dpdk/spdk_pid1940363 00:36:33.512 Removing: /var/run/dpdk/spdk_pid1945389 00:36:33.512 Removing: /var/run/dpdk/spdk_pid1947233 00:36:33.512 Removing: /var/run/dpdk/spdk_pid1949115 00:36:33.512 Removing: /var/run/dpdk/spdk_pid1950362 00:36:33.512 Removing: /var/run/dpdk/spdk_pid1952343 00:36:33.512 Removing: /var/run/dpdk/spdk_pid1953413 00:36:33.512 Removing: /var/run/dpdk/spdk_pid1962151 00:36:33.512 Removing: /var/run/dpdk/spdk_pid1962610 00:36:33.512 Removing: /var/run/dpdk/spdk_pid1963072 00:36:33.512 Removing: /var/run/dpdk/spdk_pid1965373 00:36:33.512 Removing: /var/run/dpdk/spdk_pid1965927 00:36:33.512 Removing: /var/run/dpdk/spdk_pid1966472 00:36:33.512 Removing: /var/run/dpdk/spdk_pid1970817 00:36:33.512 Removing: /var/run/dpdk/spdk_pid1970828 00:36:33.512 Removing: /var/run/dpdk/spdk_pid1972346 00:36:33.512 Removing: /var/run/dpdk/spdk_pid1972905 00:36:33.512 Removing: /var/run/dpdk/spdk_pid1972915 00:36:33.512 Clean 00:36:33.512 11:03:40 -- common/autotest_common.sh@1453 -- # return 0 00:36:33.512 11:03:40 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:36:33.512 11:03:40 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:33.512 11:03:40 -- common/autotest_common.sh@10 -- # set +x 00:36:33.512 11:03:40 -- 
spdk/autotest.sh@391 -- # timing_exit autotest 00:36:33.512 11:03:40 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:33.512 11:03:40 -- common/autotest_common.sh@10 -- # set +x 00:36:33.512 11:03:40 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:33.771 11:03:40 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:36:33.771 11:03:40 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:36:33.771 11:03:40 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:36:33.772 11:03:40 -- spdk/autotest.sh@398 -- # hostname 00:36:33.772 11:03:40 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:36:33.772 geninfo: WARNING: invalid characters removed from testname! 00:36:55.713 11:04:02 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:58.250 11:04:05 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:59.628 11:04:07 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:01.532 11:04:08 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:04.067 11:04:10 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:05.445 11:04:12 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:07.350 11:04:14 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:07.350 11:04:14 -- spdk/autorun.sh@1 -- $ timing_finish 00:37:07.351 11:04:14 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:37:07.351 11:04:14 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:07.351 11:04:14 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:37:07.351 11:04:14 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:07.351 + [[ -n 1415644 ]] 00:37:07.351 + sudo kill 1415644 00:37:07.619 [Pipeline] } 00:37:07.637 [Pipeline] // stage 00:37:07.642 [Pipeline] } 00:37:07.657 [Pipeline] // timeout 00:37:07.662 [Pipeline] } 00:37:07.675 [Pipeline] // catchError 00:37:07.680 [Pipeline] } 00:37:07.696 [Pipeline] // wrap 00:37:07.703 [Pipeline] } 00:37:07.717 [Pipeline] // catchError 00:37:07.729 [Pipeline] stage 00:37:07.732 [Pipeline] { (Epilogue) 00:37:07.746 [Pipeline] catchError 00:37:07.747 [Pipeline] { 00:37:07.760 [Pipeline] echo 00:37:07.764 Cleanup processes 00:37:07.771 [Pipeline] sh 00:37:08.059 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:08.059 1983577 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:08.074 [Pipeline] sh 00:37:08.362 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:08.362 ++ grep -v 'sudo pgrep' 00:37:08.362 ++ awk '{print $1}' 00:37:08.362 + sudo kill -9 00:37:08.362 + true 00:37:08.374 [Pipeline] sh 00:37:08.660 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:20.945 [Pipeline] sh 00:37:21.232 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:21.232 Artifacts sizes are good 00:37:21.245 [Pipeline] archiveArtifacts 00:37:21.253 Archiving artifacts 00:37:21.387 [Pipeline] sh 00:37:21.673 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:37:21.687 [Pipeline] cleanWs 00:37:21.697 [WS-CLEANUP] Deleting project workspace... 00:37:21.697 [WS-CLEANUP] Deferred wipeout is used... 00:37:21.703 [WS-CLEANUP] done 00:37:21.705 [Pipeline] } 00:37:21.721 [Pipeline] // catchError 00:37:21.733 [Pipeline] sh 00:37:22.016 + logger -p user.info -t JENKINS-CI 00:37:22.023 [Pipeline] } 00:37:22.033 [Pipeline] // stage 00:37:22.038 [Pipeline] } 00:37:22.050 [Pipeline] // node 00:37:22.053 [Pipeline] End of Pipeline 00:37:22.079 Finished: SUCCESS